Test Report: KVM_Linux_crio 19336

86221fe19cf32e1f04d47d4acd0a12df0852414c:2024-07-29:35546
Test fail (11/216)

TestAddons/Setup (2400.06s)
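The start subprocess below was killed at roughly the 40-minute mark, matching the 2400.06 s duration in the heading. To rerun only this test locally, a minimal sketch using the standard Go test runner follows; the test/integration package path and the timeout value are assumptions, while the test name is taken from this report:

	go test -v -timeout 45m ./test/integration -run 'TestAddons/Setup'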

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-693556 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-693556 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.957399592s)
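For readability, the failing invocation above is repeated here with the flags grouped one per line; this is a reproduction sketch, not part of the captured log, and the profile name and binary path are specific to this CI run:

	out/minikube-linux-amd64 start -p addons-693556 \
	  --wait=true --memory=4000 --alsologtostderr \
	  --driver=kvm2 --container-runtime=crio \
	  --addons=registry --addons=metrics-server --addons=volumesnapshots \
	  --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
	  --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	  --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	  --addons=ingress --addons=ingress-dns --addons=helm-tiller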

-- stdout --
	* [addons-693556] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19336
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-693556" primary control-plane node in "addons-693556" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image docker.io/registry:2.8.3
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image docker.io/busybox:stable
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying ingress addon...
	* Verifying registry addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-693556 service yakd-dashboard -n yakd-dashboard
	
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-693556 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: storage-provisioner, nvidia-device-plugin, default-storageclass, ingress-dns, cloud-spanner, helm-tiller, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth

-- /stdout --
** stderr ** 
	I0729 10:46:04.377685  121842 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:04.377787  121842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:04.377800  121842 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:04.377804  121842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:04.378008  121842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 10:46:04.378590  121842 out.go:298] Setting JSON to false
	I0729 10:46:04.379474  121842 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1715,"bootTime":1722248249,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:46:04.379533  121842 start.go:139] virtualization: kvm guest
	I0729 10:46:04.381654  121842 out.go:177] * [addons-693556] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:46:04.382944  121842 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 10:46:04.382972  121842 notify.go:220] Checking for updates...
	I0729 10:46:04.385127  121842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:46:04.386334  121842 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 10:46:04.387453  121842 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 10:46:04.388503  121842 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 10:46:04.389683  121842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:46:04.390916  121842 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:46:04.423045  121842 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 10:46:04.424300  121842 start.go:297] selected driver: kvm2
	I0729 10:46:04.424312  121842 start.go:901] validating driver "kvm2" against <nil>
	I0729 10:46:04.424348  121842 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:46:04.425154  121842 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:04.425249  121842 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:46:04.440107  121842 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:46:04.440161  121842 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:46:04.440404  121842 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:46:04.440469  121842 cni.go:84] Creating CNI manager for ""
	I0729 10:46:04.440486  121842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:46:04.440498  121842 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:46:04.440572  121842 start.go:340] cluster config:
	{Name:addons-693556 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-693556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:46:04.440683  121842 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:46:04.442475  121842 out.go:177] * Starting "addons-693556" primary control-plane node in "addons-693556" cluster
	I0729 10:46:04.443792  121842 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:46:04.443823  121842 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 10:46:04.443845  121842 cache.go:56] Caching tarball of preloaded images
	I0729 10:46:04.443935  121842 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 10:46:04.443949  121842 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:46:04.444312  121842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/config.json ...
	I0729 10:46:04.444339  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/config.json: {Name:mk7a16399f53d0ff5586201b8de2209f09e124b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:04.444517  121842 start.go:360] acquireMachinesLock for addons-693556: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 10:46:04.444588  121842 start.go:364] duration metric: took 51.936µs to acquireMachinesLock for "addons-693556"
	I0729 10:46:04.444610  121842 start.go:93] Provisioning new machine with config: &{Name:addons-693556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-693556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:46:04.444685  121842 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 10:46:04.447303  121842 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 10:46:04.447431  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:04.447478  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:04.462310  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I0729 10:46:04.462780  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:04.463388  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:04.463411  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:04.463778  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:04.463980  121842 main.go:141] libmachine: (addons-693556) Calling .GetMachineName
	I0729 10:46:04.464125  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:04.464279  121842 start.go:159] libmachine.API.Create for "addons-693556" (driver="kvm2")
	I0729 10:46:04.464301  121842 client.go:168] LocalClient.Create starting
	I0729 10:46:04.464331  121842 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem
	I0729 10:46:04.518280  121842 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem
	I0729 10:46:04.567434  121842 main.go:141] libmachine: Running pre-create checks...
	I0729 10:46:04.567457  121842 main.go:141] libmachine: (addons-693556) Calling .PreCreateCheck
	I0729 10:46:04.567960  121842 main.go:141] libmachine: (addons-693556) Calling .GetConfigRaw
	I0729 10:46:04.568393  121842 main.go:141] libmachine: Creating machine...
	I0729 10:46:04.568410  121842 main.go:141] libmachine: (addons-693556) Calling .Create
	I0729 10:46:04.568533  121842 main.go:141] libmachine: (addons-693556) Creating KVM machine...
	I0729 10:46:04.569679  121842 main.go:141] libmachine: (addons-693556) DBG | found existing default KVM network
	I0729 10:46:04.570443  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:04.570308  121864 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000127ad0}
	I0729 10:46:04.570475  121842 main.go:141] libmachine: (addons-693556) DBG | created network xml: 
	I0729 10:46:04.570492  121842 main.go:141] libmachine: (addons-693556) DBG | <network>
	I0729 10:46:04.570501  121842 main.go:141] libmachine: (addons-693556) DBG |   <name>mk-addons-693556</name>
	I0729 10:46:04.570509  121842 main.go:141] libmachine: (addons-693556) DBG |   <dns enable='no'/>
	I0729 10:46:04.570516  121842 main.go:141] libmachine: (addons-693556) DBG |   
	I0729 10:46:04.570537  121842 main.go:141] libmachine: (addons-693556) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 10:46:04.570545  121842 main.go:141] libmachine: (addons-693556) DBG |     <dhcp>
	I0729 10:46:04.570552  121842 main.go:141] libmachine: (addons-693556) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 10:46:04.570562  121842 main.go:141] libmachine: (addons-693556) DBG |     </dhcp>
	I0729 10:46:04.570571  121842 main.go:141] libmachine: (addons-693556) DBG |   </ip>
	I0729 10:46:04.570583  121842 main.go:141] libmachine: (addons-693556) DBG |   
	I0729 10:46:04.570592  121842 main.go:141] libmachine: (addons-693556) DBG | </network>
	I0729 10:46:04.570604  121842 main.go:141] libmachine: (addons-693556) DBG | 
	I0729 10:46:04.576047  121842 main.go:141] libmachine: (addons-693556) DBG | trying to create private KVM network mk-addons-693556 192.168.39.0/24...
	I0729 10:46:04.640042  121842 main.go:141] libmachine: (addons-693556) DBG | private KVM network mk-addons-693556 192.168.39.0/24 created
	I0729 10:46:04.640067  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:04.639994  121864 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 10:46:04.640084  121842 main.go:141] libmachine: (addons-693556) Setting up store path in /home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556 ...
	I0729 10:46:04.640100  121842 main.go:141] libmachine: (addons-693556) Building disk image from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 10:46:04.640180  121842 main.go:141] libmachine: (addons-693556) Downloading /home/jenkins/minikube-integration/19336-113730/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 10:46:04.886339  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:04.886198  121864 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa...
	I0729 10:46:04.941630  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:04.941503  121864 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/addons-693556.rawdisk...
	I0729 10:46:04.941663  121842 main.go:141] libmachine: (addons-693556) DBG | Writing magic tar header
	I0729 10:46:04.941678  121842 main.go:141] libmachine: (addons-693556) DBG | Writing SSH key tar header
	I0729 10:46:04.941847  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:04.941675  121864 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556 ...
	I0729 10:46:04.941920  121842 main.go:141] libmachine: (addons-693556) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556 (perms=drwx------)
	I0729 10:46:04.941942  121842 main.go:141] libmachine: (addons-693556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556
	I0729 10:46:04.941967  121842 main.go:141] libmachine: (addons-693556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines
	I0729 10:46:04.941978  121842 main.go:141] libmachine: (addons-693556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 10:46:04.941996  121842 main.go:141] libmachine: (addons-693556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730
	I0729 10:46:04.942016  121842 main.go:141] libmachine: (addons-693556) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines (perms=drwxr-xr-x)
	I0729 10:46:04.942027  121842 main.go:141] libmachine: (addons-693556) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 10:46:04.942042  121842 main.go:141] libmachine: (addons-693556) DBG | Checking permissions on dir: /home/jenkins
	I0729 10:46:04.942052  121842 main.go:141] libmachine: (addons-693556) DBG | Checking permissions on dir: /home
	I0729 10:46:04.942066  121842 main.go:141] libmachine: (addons-693556) DBG | Skipping /home - not owner
	I0729 10:46:04.942080  121842 main.go:141] libmachine: (addons-693556) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube (perms=drwxr-xr-x)
	I0729 10:46:04.942097  121842 main.go:141] libmachine: (addons-693556) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730 (perms=drwxrwxr-x)
	I0729 10:46:04.942109  121842 main.go:141] libmachine: (addons-693556) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 10:46:04.942118  121842 main.go:141] libmachine: (addons-693556) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 10:46:04.942133  121842 main.go:141] libmachine: (addons-693556) Creating domain...
	I0729 10:46:04.942963  121842 main.go:141] libmachine: (addons-693556) define libvirt domain using xml: 
	I0729 10:46:04.942996  121842 main.go:141] libmachine: (addons-693556) <domain type='kvm'>
	I0729 10:46:04.943007  121842 main.go:141] libmachine: (addons-693556)   <name>addons-693556</name>
	I0729 10:46:04.943016  121842 main.go:141] libmachine: (addons-693556)   <memory unit='MiB'>4000</memory>
	I0729 10:46:04.943025  121842 main.go:141] libmachine: (addons-693556)   <vcpu>2</vcpu>
	I0729 10:46:04.943037  121842 main.go:141] libmachine: (addons-693556)   <features>
	I0729 10:46:04.943042  121842 main.go:141] libmachine: (addons-693556)     <acpi/>
	I0729 10:46:04.943046  121842 main.go:141] libmachine: (addons-693556)     <apic/>
	I0729 10:46:04.943051  121842 main.go:141] libmachine: (addons-693556)     <pae/>
	I0729 10:46:04.943057  121842 main.go:141] libmachine: (addons-693556)     
	I0729 10:46:04.943062  121842 main.go:141] libmachine: (addons-693556)   </features>
	I0729 10:46:04.943094  121842 main.go:141] libmachine: (addons-693556)   <cpu mode='host-passthrough'>
	I0729 10:46:04.943106  121842 main.go:141] libmachine: (addons-693556)   
	I0729 10:46:04.943114  121842 main.go:141] libmachine: (addons-693556)   </cpu>
	I0729 10:46:04.943142  121842 main.go:141] libmachine: (addons-693556)   <os>
	I0729 10:46:04.943162  121842 main.go:141] libmachine: (addons-693556)     <type>hvm</type>
	I0729 10:46:04.943168  121842 main.go:141] libmachine: (addons-693556)     <boot dev='cdrom'/>
	I0729 10:46:04.943174  121842 main.go:141] libmachine: (addons-693556)     <boot dev='hd'/>
	I0729 10:46:04.943180  121842 main.go:141] libmachine: (addons-693556)     <bootmenu enable='no'/>
	I0729 10:46:04.943185  121842 main.go:141] libmachine: (addons-693556)   </os>
	I0729 10:46:04.943190  121842 main.go:141] libmachine: (addons-693556)   <devices>
	I0729 10:46:04.943197  121842 main.go:141] libmachine: (addons-693556)     <disk type='file' device='cdrom'>
	I0729 10:46:04.943205  121842 main.go:141] libmachine: (addons-693556)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/boot2docker.iso'/>
	I0729 10:46:04.943212  121842 main.go:141] libmachine: (addons-693556)       <target dev='hdc' bus='scsi'/>
	I0729 10:46:04.943217  121842 main.go:141] libmachine: (addons-693556)       <readonly/>
	I0729 10:46:04.943221  121842 main.go:141] libmachine: (addons-693556)     </disk>
	I0729 10:46:04.943254  121842 main.go:141] libmachine: (addons-693556)     <disk type='file' device='disk'>
	I0729 10:46:04.943273  121842 main.go:141] libmachine: (addons-693556)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 10:46:04.943286  121842 main.go:141] libmachine: (addons-693556)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/addons-693556.rawdisk'/>
	I0729 10:46:04.943293  121842 main.go:141] libmachine: (addons-693556)       <target dev='hda' bus='virtio'/>
	I0729 10:46:04.943301  121842 main.go:141] libmachine: (addons-693556)     </disk>
	I0729 10:46:04.943312  121842 main.go:141] libmachine: (addons-693556)     <interface type='network'>
	I0729 10:46:04.943323  121842 main.go:141] libmachine: (addons-693556)       <source network='mk-addons-693556'/>
	I0729 10:46:04.943333  121842 main.go:141] libmachine: (addons-693556)       <model type='virtio'/>
	I0729 10:46:04.943344  121842 main.go:141] libmachine: (addons-693556)     </interface>
	I0729 10:46:04.943354  121842 main.go:141] libmachine: (addons-693556)     <interface type='network'>
	I0729 10:46:04.943365  121842 main.go:141] libmachine: (addons-693556)       <source network='default'/>
	I0729 10:46:04.943376  121842 main.go:141] libmachine: (addons-693556)       <model type='virtio'/>
	I0729 10:46:04.943395  121842 main.go:141] libmachine: (addons-693556)     </interface>
	I0729 10:46:04.943411  121842 main.go:141] libmachine: (addons-693556)     <serial type='pty'>
	I0729 10:46:04.943423  121842 main.go:141] libmachine: (addons-693556)       <target port='0'/>
	I0729 10:46:04.943433  121842 main.go:141] libmachine: (addons-693556)     </serial>
	I0729 10:46:04.943448  121842 main.go:141] libmachine: (addons-693556)     <console type='pty'>
	I0729 10:46:04.943456  121842 main.go:141] libmachine: (addons-693556)       <target type='serial' port='0'/>
	I0729 10:46:04.943462  121842 main.go:141] libmachine: (addons-693556)     </console>
	I0729 10:46:04.943467  121842 main.go:141] libmachine: (addons-693556)     <rng model='virtio'>
	I0729 10:46:04.943475  121842 main.go:141] libmachine: (addons-693556)       <backend model='random'>/dev/random</backend>
	I0729 10:46:04.943484  121842 main.go:141] libmachine: (addons-693556)     </rng>
	I0729 10:46:04.943494  121842 main.go:141] libmachine: (addons-693556)     
	I0729 10:46:04.943502  121842 main.go:141] libmachine: (addons-693556)     
	I0729 10:46:04.943516  121842 main.go:141] libmachine: (addons-693556)   </devices>
	I0729 10:46:04.943536  121842 main.go:141] libmachine: (addons-693556) </domain>
	I0729 10:46:04.943550  121842 main.go:141] libmachine: (addons-693556) 
	I0729 10:46:04.949287  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:2c:46:7c in network default
	I0729 10:46:04.949778  121842 main.go:141] libmachine: (addons-693556) Ensuring networks are active...
	I0729 10:46:04.949802  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:04.950479  121842 main.go:141] libmachine: (addons-693556) Ensuring network default is active
	I0729 10:46:04.950783  121842 main.go:141] libmachine: (addons-693556) Ensuring network mk-addons-693556 is active
	I0729 10:46:04.951221  121842 main.go:141] libmachine: (addons-693556) Getting domain xml...
	I0729 10:46:04.951827  121842 main.go:141] libmachine: (addons-693556) Creating domain...
	I0729 10:46:06.326662  121842 main.go:141] libmachine: (addons-693556) Waiting to get IP...
	I0729 10:46:06.327571  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:06.328002  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:06.328025  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:06.327964  121864 retry.go:31] will retry after 290.472201ms: waiting for machine to come up
	I0729 10:46:06.620557  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:06.621003  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:06.621042  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:06.620943  121864 retry.go:31] will retry after 322.884382ms: waiting for machine to come up
	I0729 10:46:06.945404  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:06.945867  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:06.945894  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:06.945809  121864 retry.go:31] will retry after 410.52966ms: waiting for machine to come up
	I0729 10:46:07.358473  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:07.358905  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:07.358933  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:07.358827  121864 retry.go:31] will retry after 388.348382ms: waiting for machine to come up
	I0729 10:46:07.748362  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:07.748817  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:07.748843  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:07.748774  121864 retry.go:31] will retry after 642.454197ms: waiting for machine to come up
	I0729 10:46:08.392520  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:08.392956  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:08.393007  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:08.392888  121864 retry.go:31] will retry after 657.673097ms: waiting for machine to come up
	I0729 10:46:09.051657  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:09.052114  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:09.052141  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:09.052068  121864 retry.go:31] will retry after 769.147856ms: waiting for machine to come up
	I0729 10:46:09.822733  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:09.823151  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:09.823182  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:09.823124  121864 retry.go:31] will retry after 900.533156ms: waiting for machine to come up
	I0729 10:46:10.725388  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:10.725789  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:10.725815  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:10.725753  121864 retry.go:31] will retry after 1.400117521s: waiting for machine to come up
	I0729 10:46:12.128320  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:12.128709  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:12.128741  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:12.128639  121864 retry.go:31] will retry after 1.460488829s: waiting for machine to come up
	I0729 10:46:13.591362  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:13.591891  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:13.591914  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:13.591857  121864 retry.go:31] will retry after 2.705538301s: waiting for machine to come up
	I0729 10:46:16.299231  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:16.299600  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:16.299625  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:16.299555  121864 retry.go:31] will retry after 2.313944166s: waiting for machine to come up
	I0729 10:46:18.616053  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:18.616434  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:18.616457  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:18.616397  121864 retry.go:31] will retry after 3.012461167s: waiting for machine to come up
	I0729 10:46:21.630032  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:21.630402  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find current IP address of domain addons-693556 in network mk-addons-693556
	I0729 10:46:21.630423  121842 main.go:141] libmachine: (addons-693556) DBG | I0729 10:46:21.630363  121864 retry.go:31] will retry after 3.95449015s: waiting for machine to come up
	I0729 10:46:25.586382  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:25.586712  121842 main.go:141] libmachine: (addons-693556) Found IP for machine: 192.168.39.32
	I0729 10:46:25.586737  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has current primary IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:25.586745  121842 main.go:141] libmachine: (addons-693556) Reserving static IP address...
	I0729 10:46:25.587154  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find host DHCP lease matching {name: "addons-693556", mac: "52:54:00:c4:e5:c9", ip: "192.168.39.32"} in network mk-addons-693556
	I0729 10:46:25.658954  121842 main.go:141] libmachine: (addons-693556) DBG | Getting to WaitForSSH function...
	I0729 10:46:25.658982  121842 main.go:141] libmachine: (addons-693556) Reserved static IP address: 192.168.39.32
	I0729 10:46:25.658995  121842 main.go:141] libmachine: (addons-693556) Waiting for SSH to be available...
	I0729 10:46:25.661915  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:25.662231  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556
	I0729 10:46:25.662259  121842 main.go:141] libmachine: (addons-693556) DBG | unable to find defined IP address of network mk-addons-693556 interface with MAC address 52:54:00:c4:e5:c9
	I0729 10:46:25.662467  121842 main.go:141] libmachine: (addons-693556) DBG | Using SSH client type: external
	I0729 10:46:25.662490  121842 main.go:141] libmachine: (addons-693556) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa (-rw-------)
	I0729 10:46:25.662558  121842 main.go:141] libmachine: (addons-693556) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:46:25.662578  121842 main.go:141] libmachine: (addons-693556) DBG | About to run SSH command:
	I0729 10:46:25.662606  121842 main.go:141] libmachine: (addons-693556) DBG | exit 0
	I0729 10:46:25.674297  121842 main.go:141] libmachine: (addons-693556) DBG | SSH cmd err, output: exit status 255: 
	I0729 10:46:25.674327  121842 main.go:141] libmachine: (addons-693556) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 10:46:25.674338  121842 main.go:141] libmachine: (addons-693556) DBG | command : exit 0
	I0729 10:46:25.674344  121842 main.go:141] libmachine: (addons-693556) DBG | err     : exit status 255
	I0729 10:46:25.674355  121842 main.go:141] libmachine: (addons-693556) DBG | output  : 
	I0729 10:46:28.676480  121842 main.go:141] libmachine: (addons-693556) DBG | Getting to WaitForSSH function...
	I0729 10:46:28.679187  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:28.679656  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:28.679687  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:28.679809  121842 main.go:141] libmachine: (addons-693556) DBG | Using SSH client type: external
	I0729 10:46:28.679838  121842 main.go:141] libmachine: (addons-693556) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa (-rw-------)
	I0729 10:46:28.679885  121842 main.go:141] libmachine: (addons-693556) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 10:46:28.679907  121842 main.go:141] libmachine: (addons-693556) DBG | About to run SSH command:
	I0729 10:46:28.679922  121842 main.go:141] libmachine: (addons-693556) DBG | exit 0
	I0729 10:46:28.801019  121842 main.go:141] libmachine: (addons-693556) DBG | SSH cmd err, output: <nil>: 
	I0729 10:46:28.801300  121842 main.go:141] libmachine: (addons-693556) KVM machine creation complete!
	I0729 10:46:28.801617  121842 main.go:141] libmachine: (addons-693556) Calling .GetConfigRaw
	I0729 10:46:28.802176  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:28.802379  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:28.802544  121842 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 10:46:28.802559  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:28.803985  121842 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 10:46:28.804011  121842 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 10:46:28.804019  121842 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 10:46:28.804035  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:28.806423  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:28.806828  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:28.806852  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:28.806985  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:28.807211  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:28.807379  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:28.807524  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:28.807699  121842 main.go:141] libmachine: Using SSH client type: native
	I0729 10:46:28.807951  121842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0729 10:46:28.807966  121842 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 10:46:28.908235  121842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:46:28.908260  121842 main.go:141] libmachine: Detecting the provisioner...
	I0729 10:46:28.908267  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:28.911184  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:28.911545  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:28.911574  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:28.911703  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:28.911906  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:28.912057  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:28.912173  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:28.912384  121842 main.go:141] libmachine: Using SSH client type: native
	I0729 10:46:28.912564  121842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0729 10:46:28.912575  121842 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 10:46:29.013569  121842 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 10:46:29.013690  121842 main.go:141] libmachine: found compatible host: buildroot
	I0729 10:46:29.013703  121842 main.go:141] libmachine: Provisioning with buildroot...
	I0729 10:46:29.013713  121842 main.go:141] libmachine: (addons-693556) Calling .GetMachineName
	I0729 10:46:29.013963  121842 buildroot.go:166] provisioning hostname "addons-693556"
	I0729 10:46:29.013994  121842 main.go:141] libmachine: (addons-693556) Calling .GetMachineName
	I0729 10:46:29.014198  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:29.016988  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.017349  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:29.017371  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.017498  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:29.017704  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:29.017865  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:29.017963  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:29.018128  121842 main.go:141] libmachine: Using SSH client type: native
	I0729 10:46:29.018318  121842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0729 10:46:29.018331  121842 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-693556 && echo "addons-693556" | sudo tee /etc/hostname
	I0729 10:46:29.129527  121842 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-693556
	
	I0729 10:46:29.129583  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:29.132345  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.132657  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:29.132694  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.132837  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:29.133083  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:29.133256  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:29.133386  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:29.133584  121842 main.go:141] libmachine: Using SSH client type: native
	I0729 10:46:29.133772  121842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0729 10:46:29.133794  121842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-693556' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-693556/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-693556' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:46:29.241198  121842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:46:29.241229  121842 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 10:46:29.241291  121842 buildroot.go:174] setting up certificates
	I0729 10:46:29.241304  121842 provision.go:84] configureAuth start
	I0729 10:46:29.241318  121842 main.go:141] libmachine: (addons-693556) Calling .GetMachineName
	I0729 10:46:29.241660  121842 main.go:141] libmachine: (addons-693556) Calling .GetIP
	I0729 10:46:29.244289  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.244657  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:29.244684  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.244897  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:29.247133  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.247502  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:29.247531  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.247690  121842 provision.go:143] copyHostCerts
	I0729 10:46:29.247792  121842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 10:46:29.247979  121842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 10:46:29.248065  121842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 10:46:29.248130  121842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.addons-693556 san=[127.0.0.1 192.168.39.32 addons-693556 localhost minikube]
	I0729 10:46:29.477900  121842 provision.go:177] copyRemoteCerts
	I0729 10:46:29.477970  121842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:46:29.477998  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:29.480470  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.480728  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:29.480754  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.480905  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:29.481136  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:29.481309  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:29.481435  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:29.558860  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 10:46:29.584053  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 10:46:29.607352  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 10:46:29.630251  121842 provision.go:87] duration metric: took 388.929242ms to configureAuth
	I0729 10:46:29.630287  121842 buildroot.go:189] setting minikube options for container-runtime
	I0729 10:46:29.630500  121842 config.go:182] Loaded profile config "addons-693556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:46:29.630599  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:29.633389  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.633776  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:29.633812  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.633956  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:29.634222  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:29.634408  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:29.634557  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:29.634741  121842 main.go:141] libmachine: Using SSH client type: native
	I0729 10:46:29.634912  121842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0729 10:46:29.634925  121842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 10:46:29.878298  121842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 10:46:29.878333  121842 main.go:141] libmachine: Checking connection to Docker...
	I0729 10:46:29.878342  121842 main.go:141] libmachine: (addons-693556) Calling .GetURL
	I0729 10:46:29.879608  121842 main.go:141] libmachine: (addons-693556) DBG | Using libvirt version 6000000
	I0729 10:46:29.881768  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.882053  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:29.882082  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.882281  121842 main.go:141] libmachine: Docker is up and running!
	I0729 10:46:29.882304  121842 main.go:141] libmachine: Reticulating splines...
	I0729 10:46:29.882313  121842 client.go:171] duration metric: took 25.418004705s to LocalClient.Create
	I0729 10:46:29.882345  121842 start.go:167] duration metric: took 25.41806652s to libmachine.API.Create "addons-693556"
	I0729 10:46:29.882357  121842 start.go:293] postStartSetup for "addons-693556" (driver="kvm2")
	I0729 10:46:29.882372  121842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:46:29.882396  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:29.882669  121842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:46:29.882695  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:29.885708  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.886058  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:29.886086  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.886266  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:29.886473  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:29.886653  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:29.886793  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:29.967074  121842 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:46:29.970943  121842 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 10:46:29.970970  121842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 10:46:29.971052  121842 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 10:46:29.971079  121842 start.go:296] duration metric: took 88.715603ms for postStartSetup
	I0729 10:46:29.971117  121842 main.go:141] libmachine: (addons-693556) Calling .GetConfigRaw
	I0729 10:46:29.971694  121842 main.go:141] libmachine: (addons-693556) Calling .GetIP
	I0729 10:46:29.974534  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.974865  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:29.974888  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.975133  121842 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/config.json ...
	I0729 10:46:29.975303  121842 start.go:128] duration metric: took 25.530607323s to createHost
	I0729 10:46:29.975333  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:29.977689  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.978027  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:29.978056  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:29.978216  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:29.978411  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:29.978581  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:29.978712  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:29.978862  121842 main.go:141] libmachine: Using SSH client type: native
	I0729 10:46:29.979039  121842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0729 10:46:29.979054  121842 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 10:46:30.081188  121842 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722249990.041260922
	
	I0729 10:46:30.081212  121842 fix.go:216] guest clock: 1722249990.041260922
	I0729 10:46:30.081220  121842 fix.go:229] Guest: 2024-07-29 10:46:30.041260922 +0000 UTC Remote: 2024-07-29 10:46:29.975316911 +0000 UTC m=+25.632501263 (delta=65.944011ms)
	I0729 10:46:30.081241  121842 fix.go:200] guest clock delta is within tolerance: 65.944011ms
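The delta reported above is simply the guest's `date +%s.%N` output minus the host timestamp taken around the SSH call, and it is accepted because it falls within fix.go's tolerance. A rough manual re-check from the host (key path and SSH user taken from the sshutil lines above; requires bc):

    GUEST=$(ssh -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa \
        docker@192.168.39.32 'date +%s.%N')
    HOST=$(date +%s.%N)
    echo "guest - host skew: $(echo "$GUEST - $HOST" | bc) s"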
	I0729 10:46:30.081247  121842 start.go:83] releasing machines lock for "addons-693556", held for 25.636647764s
	I0729 10:46:30.081266  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:30.081513  121842 main.go:141] libmachine: (addons-693556) Calling .GetIP
	I0729 10:46:30.084011  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:30.084378  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:30.084400  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:30.084592  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:30.085143  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:30.085314  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:30.085434  121842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:46:30.085493  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:30.085516  121842 ssh_runner.go:195] Run: cat /version.json
	I0729 10:46:30.085540  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:30.087867  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:30.088032  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:30.088232  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:30.088256  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:30.088412  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:30.088423  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:30.088443  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:30.088617  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:30.088659  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:30.088790  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:30.088824  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:30.088927  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:30.089011  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:30.089087  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:30.181590  121842 ssh_runner.go:195] Run: systemctl --version
	I0729 10:46:30.187404  121842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 10:46:30.353267  121842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 10:46:30.358623  121842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 10:46:30.358694  121842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:46:30.373681  121842 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 10:46:30.373708  121842 start.go:495] detecting cgroup driver to use...
	I0729 10:46:30.373776  121842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 10:46:30.389809  121842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 10:46:30.403681  121842 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:46:30.403736  121842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:46:30.417130  121842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:46:30.430214  121842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:46:30.537538  121842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:46:30.684874  121842 docker.go:233] disabling docker service ...
	I0729 10:46:30.684981  121842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:46:30.698479  121842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:46:30.711226  121842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:46:30.827828  121842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:46:30.935029  121842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:46:30.948983  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:46:30.966214  121842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 10:46:30.966273  121842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:46:30.975976  121842 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 10:46:30.976072  121842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:46:30.985953  121842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:46:30.995719  121842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:46:31.005631  121842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:46:31.015303  121842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:46:31.025100  121842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 10:46:31.041273  121842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
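The sequence above points crictl at the CRI-O socket and rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl. A quick way to confirm the resulting settings on the guest:

    cat /etc/crictl.yaml        # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf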
	I0729 10:46:31.051399  121842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:46:31.060234  121842 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 10:46:31.060297  121842 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 10:46:31.072355  121842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
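The sysctl probe above fails only because br_netfilter is not loaded yet; after the modprobe and the ip_forward write, both settings can be verified directly on the guest:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward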
	I0729 10:46:31.081351  121842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:46:31.189107  121842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 10:46:31.315053  121842 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 10:46:31.315175  121842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 10:46:31.319805  121842 start.go:563] Will wait 60s for crictl version
	I0729 10:46:31.319882  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:46:31.323350  121842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:46:31.357504  121842 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 10:46:31.357634  121842 ssh_runner.go:195] Run: crio --version
	I0729 10:46:31.387570  121842 ssh_runner.go:195] Run: crio --version
	I0729 10:46:31.416915  121842 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 10:46:31.418268  121842 main.go:141] libmachine: (addons-693556) Calling .GetIP
	I0729 10:46:31.420755  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:31.421086  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:31.421121  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:31.421389  121842 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 10:46:31.425218  121842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:46:31.437016  121842 kubeadm.go:883] updating cluster {Name:addons-693556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-693556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 10:46:31.437148  121842 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:46:31.437209  121842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:46:31.469127  121842 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 10:46:31.469211  121842 ssh_runner.go:195] Run: which lz4
	I0729 10:46:31.472798  121842 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 10:46:31.477066  121842 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 10:46:31.477115  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 10:46:32.717098  121842 crio.go:462] duration metric: took 1.244333257s to copy over tarball
	I0729 10:46:32.717187  121842 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 10:46:34.952529  121842 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.235301395s)
	I0729 10:46:34.952573  121842 crio.go:469] duration metric: took 2.235442451s to extract the tarball
	I0729 10:46:34.952585  121842 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 10:46:34.989618  121842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:46:35.028039  121842 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 10:46:35.028069  121842 cache_images.go:84] Images are preloaded, skipping loading
	I0729 10:46:35.028078  121842 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.30.3 crio true true} ...
	I0729 10:46:35.028202  121842 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-693556 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-693556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 10:46:35.028271  121842 ssh_runner.go:195] Run: crio config
	I0729 10:46:35.079955  121842 cni.go:84] Creating CNI manager for ""
	I0729 10:46:35.079984  121842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:46:35.079998  121842 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:46:35.080024  121842 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-693556 NodeName:addons-693556 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:46:35.080189  121842 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-693556"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 10:46:35.080253  121842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:46:35.089663  121842 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:46:35.089756  121842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 10:46:35.098708  121842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 10:46:35.114587  121842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:46:35.130877  121842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
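The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml. It can be sanity-checked without mutating the node by using kubeadm's dry-run mode with the bundled binary; on an already-initialized node the same --ignore-preflight-errors list seen in the init command further down would also be needed:

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run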
	I0729 10:46:35.147069  121842 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0729 10:46:35.150709  121842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:46:35.162134  121842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:46:35.283607  121842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:46:35.300457  121842 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556 for IP: 192.168.39.32
	I0729 10:46:35.300487  121842 certs.go:194] generating shared ca certs ...
	I0729 10:46:35.300517  121842 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:35.300730  121842 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 10:46:35.527055  121842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt ...
	I0729 10:46:35.527090  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt: {Name:mk2fc53e5615e4e758a257f47e1c039229851171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:35.527264  121842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key ...
	I0729 10:46:35.527275  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key: {Name:mk18e896ddf5847476cb8cf2fef99843f9704c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:35.527344  121842 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 10:46:35.659880  121842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt ...
	I0729 10:46:35.659911  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt: {Name:mk8f0f6d705ce886a4cb9b7b81219d396f41ddc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:35.660069  121842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key ...
	I0729 10:46:35.660080  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key: {Name:mk541df7f31305db36d3c65558fde4ef45c28f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:35.660145  121842 certs.go:256] generating profile certs ...
	I0729 10:46:35.660203  121842 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/client.key
	I0729 10:46:35.660216  121842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/client.crt with IP's: []
	I0729 10:46:35.783819  121842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/client.crt ...
	I0729 10:46:35.783853  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/client.crt: {Name:mk3aabb3a1f206041640513876852a4a04b562e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:35.784016  121842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/client.key ...
	I0729 10:46:35.784027  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/client.key: {Name:mka52fd64e9373ed84a325ab79bea76fe21dd480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:35.784094  121842 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.key.05f14bb5
	I0729 10:46:35.784112  121842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.crt.05f14bb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.32]
	I0729 10:46:35.962685  121842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.crt.05f14bb5 ...
	I0729 10:46:35.962725  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.crt.05f14bb5: {Name:mkebbbd80fef369ee77f8167658f126fc054bf13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:35.963086  121842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.key.05f14bb5 ...
	I0729 10:46:35.963117  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.key.05f14bb5: {Name:mkf4248d2f7ab301fcca119f58148477ef3fa309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:35.963234  121842 certs.go:381] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.crt.05f14bb5 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.crt
	I0729 10:46:35.963347  121842 certs.go:385] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.key.05f14bb5 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.key
	I0729 10:46:35.963392  121842 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/proxy-client.key
	I0729 10:46:35.963410  121842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/proxy-client.crt with IP's: []
	I0729 10:46:36.008866  121842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/proxy-client.crt ...
	I0729 10:46:36.008898  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/proxy-client.crt: {Name:mk630a05a4feaedd1c33f2017538edb01b5de132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:36.009101  121842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/proxy-client.key ...
	I0729 10:46:36.009118  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/proxy-client.key: {Name:mk6d6163ea4f37c16ee92c0497e3bca27b04582b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:36.009317  121842 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 10:46:36.009352  121842 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 10:46:36.009376  121842 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:46:36.009402  121842 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 10:46:36.009985  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:46:36.037001  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 10:46:36.065062  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:46:36.101954  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 10:46:36.124494  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 10:46:36.146523  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 10:46:36.168881  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:46:36.191214  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/addons-693556/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 10:46:36.213372  121842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:46:36.234884  121842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:46:36.250519  121842 ssh_runner.go:195] Run: openssl version
	I0729 10:46:36.255912  121842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:46:36.266239  121842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:46:36.270405  121842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:46:36.270468  121842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:46:36.275873  121842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
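The /etc/ssl/certs/b5213941.0 link created above follows OpenSSL's hashed-directory convention: the filename is the certificate's subject hash (printed by the `openssl x509 -hash` call) with a .0 suffix, so the link name can be reproduced like this:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0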
	I0729 10:46:36.286159  121842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:46:36.289868  121842 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 10:46:36.289918  121842 kubeadm.go:392] StartCluster: {Name:addons-693556 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-693556 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:46:36.289989  121842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 10:46:36.290032  121842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 10:46:36.322948  121842 cri.go:89] found id: ""
	I0729 10:46:36.323025  121842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:46:36.332658  121842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:46:36.345394  121842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:46:36.358274  121842 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:46:36.358301  121842 kubeadm.go:157] found existing configuration files:
	
	I0729 10:46:36.358359  121842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 10:46:36.367205  121842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:46:36.367300  121842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:46:36.376712  121842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 10:46:36.385475  121842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:46:36.385550  121842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:46:36.394837  121842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 10:46:36.403259  121842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:46:36.403336  121842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:46:36.412332  121842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 10:46:36.420901  121842 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:46:36.420999  121842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 10:46:36.429622  121842 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 10:46:36.592028  121842 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 10:46:46.520347  121842 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 10:46:46.520417  121842 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:46:46.520557  121842 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:46:46.520701  121842 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:46:46.520828  121842 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 10:46:46.520909  121842 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:46:46.522398  121842 out.go:204]   - Generating certificates and keys ...
	I0729 10:46:46.522494  121842 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:46:46.522600  121842 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:46:46.522714  121842 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 10:46:46.522792  121842 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 10:46:46.522873  121842 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 10:46:46.522959  121842 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 10:46:46.523040  121842 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 10:46:46.523173  121842 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-693556 localhost] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0729 10:46:46.523222  121842 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 10:46:46.523322  121842 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-693556 localhost] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0729 10:46:46.523384  121842 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 10:46:46.523436  121842 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 10:46:46.523473  121842 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 10:46:46.523527  121842 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:46:46.523600  121842 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:46:46.523683  121842 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 10:46:46.523736  121842 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:46:46.523793  121842 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:46:46.523838  121842 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:46:46.523940  121842 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:46:46.524001  121842 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:46:46.525481  121842 out.go:204]   - Booting up control plane ...
	I0729 10:46:46.525566  121842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:46:46.525635  121842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:46:46.525690  121842 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:46:46.525774  121842 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:46:46.525878  121842 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:46:46.525956  121842 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:46:46.526091  121842 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 10:46:46.526155  121842 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 10:46:46.526204  121842 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.610732ms
	I0729 10:46:46.526265  121842 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 10:46:46.526328  121842 kubeadm.go:310] [api-check] The API server is healthy after 5.001764143s
	I0729 10:46:46.526432  121842 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:46:46.526592  121842 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:46:46.526684  121842 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:46:46.526881  121842 kubeadm.go:310] [mark-control-plane] Marking the node addons-693556 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:46:46.526961  121842 kubeadm.go:310] [bootstrap-token] Using token: 0g3941.1lag00wejwxxkf79
	I0729 10:46:46.528296  121842 out.go:204]   - Configuring RBAC rules ...
	I0729 10:46:46.528400  121842 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:46:46.528494  121842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:46:46.528604  121842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:46:46.528749  121842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:46:46.528905  121842 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:46:46.528992  121842 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:46:46.529083  121842 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:46:46.529118  121842 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:46:46.529168  121842 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:46:46.529177  121842 kubeadm.go:310] 
	I0729 10:46:46.529225  121842 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:46:46.529231  121842 kubeadm.go:310] 
	I0729 10:46:46.529315  121842 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:46:46.529327  121842 kubeadm.go:310] 
	I0729 10:46:46.529385  121842 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:46:46.529463  121842 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:46:46.529533  121842 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:46:46.529542  121842 kubeadm.go:310] 
	I0729 10:46:46.529608  121842 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:46:46.529618  121842 kubeadm.go:310] 
	I0729 10:46:46.529683  121842 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:46:46.529690  121842 kubeadm.go:310] 
	I0729 10:46:46.529759  121842 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:46:46.529863  121842 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:46:46.529949  121842 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:46:46.529960  121842 kubeadm.go:310] 
	I0729 10:46:46.530050  121842 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:46:46.530149  121842 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:46:46.530158  121842 kubeadm.go:310] 
	I0729 10:46:46.530281  121842 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0g3941.1lag00wejwxxkf79 \
	I0729 10:46:46.530408  121842 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb \
	I0729 10:46:46.530431  121842 kubeadm.go:310] 	--control-plane 
	I0729 10:46:46.530437  121842 kubeadm.go:310] 
	I0729 10:46:46.530517  121842 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:46:46.530531  121842 kubeadm.go:310] 
	I0729 10:46:46.530650  121842 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0g3941.1lag00wejwxxkf79 \
	I0729 10:46:46.530813  121842 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb 
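The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key. It can be recomputed on the control-plane node from the certificatesDir used in this run:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'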
	I0729 10:46:46.530830  121842 cni.go:84] Creating CNI manager for ""
	I0729 10:46:46.530844  121842 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:46:46.532265  121842 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 10:46:46.533309  121842 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 10:46:46.548508  121842 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
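The 496-byte conflist written above is minikube's bridge CNI configuration. Its exact contents vary by minikube version, so the simplest confirmation is to read it back and check which CNI plugin binaries are available on the guest (the /opt/cni/bin path is the kubelet default and an assumption here):

    sudo cat /etc/cni/net.d/1-k8s.conflist
    ls /opt/cni/bin    # bridge, host-local, portmap, etc. are typically shipped in the ISO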
	I0729 10:46:46.570424  121842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:46:46.570493  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:46.570534  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-693556 minikube.k8s.io/updated_at=2024_07_29T10_46_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53 minikube.k8s.io/name=addons-693556 minikube.k8s.io/primary=true
	I0729 10:46:46.690802  121842 ops.go:34] apiserver oom_adj: -16
	I0729 10:46:46.690865  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:47.191559  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:47.691846  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:48.191243  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:48.691701  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:49.191126  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:49.691963  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:50.191937  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:50.691789  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:51.191509  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:51.691906  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:52.191665  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:52.691631  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:53.191883  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:53.691008  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:54.191233  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:54.691739  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:55.191772  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:55.691831  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:56.191817  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:56.691279  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:57.191863  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:57.691838  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:58.191563  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:58.691940  121842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:46:58.786655  121842 kubeadm.go:1113] duration metric: took 12.216224049s to wait for elevateKubeSystemPrivileges
	I0729 10:46:58.786700  121842 kubeadm.go:394] duration metric: took 22.496785213s to StartCluster
	I0729 10:46:58.786725  121842 settings.go:142] acquiring lock: {Name:mkb2a487c2f52476061a6d736b8e75563062eb9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:58.786871  121842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 10:46:58.787277  121842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/kubeconfig: {Name:mkb219e196dca6dd8aa7af14918c6562be58786a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:46:58.787492  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 10:46:58.787504  121842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 10:46:58.787585  121842 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 10:46:58.787719  121842 addons.go:69] Setting yakd=true in profile "addons-693556"
	I0729 10:46:58.787723  121842 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-693556"
	I0729 10:46:58.787716  121842 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-693556"
	I0729 10:46:58.787745  121842 config.go:182] Loaded profile config "addons-693556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:46:58.787755  121842 addons.go:234] Setting addon yakd=true in "addons-693556"
	I0729 10:46:58.787761  121842 addons.go:69] Setting gcp-auth=true in profile "addons-693556"
	I0729 10:46:58.787772  121842 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-693556"
	I0729 10:46:58.787797  121842 mustload.go:65] Loading cluster: addons-693556
	I0729 10:46:58.787802  121842 addons.go:69] Setting registry=true in profile "addons-693556"
	I0729 10:46:58.787806  121842 addons.go:69] Setting cloud-spanner=true in profile "addons-693556"
	I0729 10:46:58.787813  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.787968  121842 config.go:182] Loaded profile config "addons-693556": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 10:46:58.787806  121842 addons.go:69] Setting volcano=true in profile "addons-693556"
	I0729 10:46:58.788037  121842 addons.go:234] Setting addon volcano=true in "addons-693556"
	I0729 10:46:58.788073  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.787815  121842 addons.go:69] Setting volumesnapshots=true in profile "addons-693556"
	I0729 10:46:58.788121  121842 addons.go:234] Setting addon volumesnapshots=true in "addons-693556"
	I0729 10:46:58.788153  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.787797  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.787799  121842 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-693556"
	I0729 10:46:58.788304  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.788312  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.788340  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.788341  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.788360  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.788487  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.788499  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.788522  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.788569  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.787823  121842 addons.go:234] Setting addon cloud-spanner=true in "addons-693556"
	I0729 10:46:58.788605  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.788630  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.788692  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.788718  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.787824  121842 addons.go:69] Setting ingress-dns=true in profile "addons-693556"
	I0729 10:46:58.788877  121842 addons.go:234] Setting addon ingress-dns=true in "addons-693556"
	I0729 10:46:58.788524  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.788935  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.789010  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.789032  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.787827  121842 addons.go:234] Setting addon registry=true in "addons-693556"
	I0729 10:46:58.789222  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.787830  121842 addons.go:69] Setting ingress=true in profile "addons-693556"
	I0729 10:46:58.789325  121842 addons.go:234] Setting addon ingress=true in "addons-693556"
	I0729 10:46:58.789362  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.789606  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.789645  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.787829  121842 addons.go:69] Setting helm-tiller=true in profile "addons-693556"
	I0729 10:46:58.789790  121842 addons.go:234] Setting addon helm-tiller=true in "addons-693556"
	I0729 10:46:58.789824  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.789843  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.789873  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.787833  121842 addons.go:69] Setting storage-provisioner=true in profile "addons-693556"
	I0729 10:46:58.787837  121842 addons.go:69] Setting inspektor-gadget=true in profile "addons-693556"
	I0729 10:46:58.790150  121842 addons.go:234] Setting addon inspektor-gadget=true in "addons-693556"
	I0729 10:46:58.790165  121842 addons.go:234] Setting addon storage-provisioner=true in "addons-693556"
	I0729 10:46:58.790170  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.790180  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.790199  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.790201  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.790523  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.790548  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.790565  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.790581  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.787839  121842 addons.go:69] Setting metrics-server=true in profile "addons-693556"
	I0729 10:46:58.790618  121842 out.go:177] * Verifying Kubernetes components...
	I0729 10:46:58.790630  121842 addons.go:234] Setting addon metrics-server=true in "addons-693556"
	I0729 10:46:58.790756  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.799753  121842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:46:58.787844  121842 addons.go:69] Setting default-storageclass=true in profile "addons-693556"
	I0729 10:46:58.799987  121842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-693556"
	I0729 10:46:58.800396  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.800420  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.789273  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.800811  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.787795  121842 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-693556"
	I0729 10:46:58.801066  121842 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-693556"
	I0729 10:46:58.801509  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.801553  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.810775  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44863
	I0729 10:46:58.811612  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.812225  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.812254  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.812635  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.812919  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41693
	I0729 10:46:58.813257  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.813313  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.813404  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.813837  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.813858  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.814277  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.814809  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.814844  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.815638  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I0729 10:46:58.816053  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.816464  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.816483  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.816833  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.817385  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.817415  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.825449  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.825482  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.825804  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42873
	I0729 10:46:58.826588  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.827258  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.827281  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.827714  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.827782  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43273
	I0729 10:46:58.828269  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.828662  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.828708  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.828753  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.828779  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.829330  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.829869  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I0729 10:46:58.829895  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.829935  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.830290  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39913
	I0729 10:46:58.830711  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.830753  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.831201  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.831220  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.831325  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.831342  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.831540  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.831735  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.831781  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
	I0729 10:46:58.832820  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.833855  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.833873  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.835871  121842 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-693556"
	I0729 10:46:58.835918  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.836318  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.836360  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.836564  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41073
	I0729 10:46:58.836590  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I0729 10:46:58.836739  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.837122  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.837222  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.837778  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.837786  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.837799  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.837803  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.838297  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.838299  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.838529  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.838712  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.838762  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.838951  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.838996  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.845379  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.845473  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.845509  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.847238  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0729 10:46:58.847468  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.847860  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.847898  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.848437  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.849092  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.849114  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.849507  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.850077  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.850114  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.855152  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32797
	I0729 10:46:58.855866  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.856532  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.856551  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.856944  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.857622  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.857659  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.860845  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0729 10:46:58.866607  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.867294  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.867315  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.867862  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.868162  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.871636  121842 addons.go:234] Setting addon default-storageclass=true in "addons-693556"
	I0729 10:46:58.871681  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:46:58.872090  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.872129  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.872357  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I0729 10:46:58.872372  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0729 10:46:58.872877  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.873491  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.873521  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.875448  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36815
	I0729 10:46:58.875619  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0729 10:46:58.876346  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.877121  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.877141  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.877378  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.877450  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.877586  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.878207  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.878247  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.878462  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.878550  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.879225  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32897
	I0729 10:46:58.879752  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.879868  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.879890  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.880240  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.880257  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.880566  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.880767  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.881491  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.881508  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.882098  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.882364  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.882708  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.884535  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.884535  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.884610  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40687
	I0729 10:46:58.884860  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.885124  121842 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 10:46:58.885159  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.885481  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.886507  121842 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 10:46:58.886525  121842 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 10:46:58.886549  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.886557  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.886570  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.886616  121842 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 10:46:58.887081  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0729 10:46:58.887447  121842 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 10:46:58.887631  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.888261  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.888433  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.889351  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.889416  121842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 10:46:58.889896  121842 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 10:46:58.889923  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.889959  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.889737  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I0729 10:46:58.889980  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.890114  121842 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 10:46:58.890225  121842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 10:46:58.890289  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.890473  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.890710  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.890948  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44777
	I0729 10:46:58.891134  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.891185  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.891575  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.891594  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.891632  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.891660  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.891675  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.891890  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.891935  121842 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 10:46:58.891954  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 10:46:58.891962  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.891974  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.892203  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.892317  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.892333  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.892364  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.892573  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:58.892916  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.893223  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.893379  121842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:46:58.893835  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.894482  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.895370  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.895566  121842 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 10:46:58.895938  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.896004  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.896019  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.896219  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.896402  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.896538  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.896676  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:58.896930  121842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:46:58.896983  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.897141  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.897169  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.897402  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.897617  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.897828  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.897961  121842 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0729 10:46:58.897994  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:58.898216  121842 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 10:46:58.898241  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 10:46:58.898259  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.898327  121842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 10:46:58.898489  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.898330  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46439
	I0729 10:46:58.899171  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.899210  121842 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 10:46:58.899222  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 10:46:58.899237  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.899777  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.899806  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.900215  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.900440  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.900607  121842 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 10:46:58.900660  121842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 10:46:58.900822  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0729 10:46:58.901516  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.901821  121842 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 10:46:58.901838  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 10:46:58.901856  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.902556  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.903047  121842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 10:46:58.903166  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.903187  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.903209  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.903224  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.903294  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.903303  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.903666  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:46:58.903765  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.903781  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:46:58.903941  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.904095  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.904048  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.904867  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.904879  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:46:58.904867  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.904890  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:46:58.904904  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:46:58.904912  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:46:58.904919  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:46:58.905083  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:58.905388  121842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 10:46:58.905411  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:46:58.905424  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 10:46:58.905488  121842 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 10:46:58.905390  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:46:58.906186  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.906219  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.906373  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.906434  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.906447  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.906623  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.906699  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.906746  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.906872  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:58.906890  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.907191  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33673
	I0729 10:46:58.907354  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.907498  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:58.907753  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.908220  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.908245  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.908421  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.908559  121842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 10:46:58.908620  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.908823  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.909981  121842 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 10:46:58.910279  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.911701  121842 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 10:46:58.911712  121842 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0729 10:46:58.911804  121842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 10:46:58.911822  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 10:46:58.911841  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.913141  121842 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 10:46:58.913159  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 10:46:58.913177  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.914509  121842 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 10:46:58.915455  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0729 10:46:58.915941  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.916236  121842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 10:46:58.916253  121842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 10:46:58.916276  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.916384  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.916534  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.916548  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.916866  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0729 10:46:58.917053  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.917081  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.917090  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.917312  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.917718  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.917754  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.917973  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.918074  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.918089  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.918116  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.918305  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.918379  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.918538  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.918596  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.918693  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.918752  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:58.918811  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.918780  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.919022  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:58.919369  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.919751  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.919994  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.920039  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.921480  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0729 10:46:58.921774  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35641
	I0729 10:46:58.921991  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.922312  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.922333  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.922447  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.922458  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.922691  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.923072  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.923080  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.923125  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.923539  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.923556  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.923582  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.923598  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.923756  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:58.924215  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.924349  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.925005  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0729 10:46:58.925532  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.926132  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.926144  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.926451  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.926501  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.926603  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.927868  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.928581  121842 out.go:177]   - Using image docker.io/busybox:stable
	I0729 10:46:58.929477  121842 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 10:46:58.931207  121842 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 10:46:58.931260  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0729 10:46:58.931274  121842 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 10:46:58.931287  121842 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 10:46:58.931301  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.931727  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.932341  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.932357  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.932547  121842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 10:46:58.932557  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 10:46:58.932568  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.932946  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.933494  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:46:58.933533  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:46:58.937297  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.937315  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.937342  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.937348  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.937361  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.937386  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.937403  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.937418  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.937484  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.937551  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.937635  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.937680  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.937784  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:58.938050  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	W0729 10:46:58.940215  121842 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36450->192.168.39.32:22: read: connection reset by peer
	I0729 10:46:58.940247  121842 retry.go:31] will retry after 204.710312ms: ssh: handshake failed: read tcp 192.168.39.1:36450->192.168.39.32:22: read: connection reset by peer
	I0729 10:46:58.943332  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42437
	I0729 10:46:58.943762  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.944279  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.944298  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.944604  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.944774  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.946592  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.947668  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I0729 10:46:58.948147  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.948368  121842 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 10:46:58.948577  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.948598  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.948865  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.949037  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.949623  121842 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 10:46:58.949640  121842 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 10:46:58.949662  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.950617  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.952064  121842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:46:58.952427  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.952832  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.952856  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.953004  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.953177  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.953314  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.953382  121842 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:46:58.953394  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:46:58.953407  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.953434  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	W0729 10:46:58.955032  121842 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36458->192.168.39.32:22: read: connection reset by peer
	I0729 10:46:58.955052  121842 retry.go:31] will retry after 144.608329ms: ssh: handshake failed: read tcp 192.168.39.1:36458->192.168.39.32:22: read: connection reset by peer
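Editor's note: the two SSH handshake failures above are transient ("connection reset by peer") and the client simply retries after a short, growing delay. A minimal Go sketch of that generic retry-with-backoff pattern follows; it is illustrative only, not minikube's actual retry.go, and dialSSH, the delays, and the attempt count are made-up stand-ins.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// dialSSH stands in for the real connection attempt; it is assumed to
	// fail transiently and succeed on a later try.
	func dialSSH() error {
		if rand.Intn(3) != 0 {
			return errors.New("ssh: handshake failed: connection reset by peer")
		}
		return nil
	}

	func main() {
		backoff := 100 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			if err := dialSSH(); err == nil {
				fmt.Println("connected")
				return
			} else {
				fmt.Printf("will retry after %v: %v\n", backoff, err)
			}
			time.Sleep(backoff)
			backoff *= 2 // grow the delay between attempts
		}
		fmt.Println("giving up")
	}
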
	I0729 10:46:58.955947  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.956363  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.956382  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.956541  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.956731  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.956870  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.957053  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:58.957698  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33097
	I0729 10:46:58.958121  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:46:58.958772  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:46:58.958787  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:46:58.959083  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:46:58.959241  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:46:58.960523  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:46:58.960715  121842 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:46:58.960728  121842 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:46:58.960742  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:46:58.962969  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.963264  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:46:58.963293  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:46:58.963397  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:46:58.963554  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:46:58.963659  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:46:58.963775  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:46:59.204981  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 10:46:59.236857  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:46:59.277902  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:46:59.284532  121842 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 10:46:59.284556  121842 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 10:46:59.286111  121842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:46:59.286158  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 10:46:59.300545  121842 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 10:46:59.300574  121842 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 10:46:59.339902  121842 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 10:46:59.339924  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 10:46:59.349157  121842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 10:46:59.349179  121842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 10:46:59.387149  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 10:46:59.402762  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 10:46:59.405675  121842 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 10:46:59.405698  121842 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 10:46:59.411905  121842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 10:46:59.411929  121842 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 10:46:59.413389  121842 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 10:46:59.413409  121842 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 10:46:59.429060  121842 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 10:46:59.429087  121842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 10:46:59.430303  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 10:46:59.433471  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 10:46:59.441935  121842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 10:46:59.441957  121842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 10:46:59.457027  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 10:46:59.611798  121842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 10:46:59.611831  121842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 10:46:59.631939  121842 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 10:46:59.631966  121842 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 10:46:59.643322  121842 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 10:46:59.643346  121842 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 10:46:59.650364  121842 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 10:46:59.650387  121842 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 10:46:59.681877  121842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 10:46:59.681901  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 10:46:59.693882  121842 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 10:46:59.693908  121842 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 10:46:59.711015  121842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 10:46:59.711046  121842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 10:46:59.728548  121842 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 10:46:59.728573  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 10:46:59.861783  121842 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 10:46:59.861810  121842 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 10:46:59.867767  121842 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 10:46:59.867786  121842 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 10:46:59.882038  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 10:46:59.893399  121842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 10:46:59.893426  121842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 10:46:59.906390  121842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 10:46:59.906420  121842 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 10:46:59.996482  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 10:47:00.065748  121842 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 10:47:00.065779  121842 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 10:47:00.078631  121842 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:47:00.078653  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 10:47:00.096231  121842 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 10:47:00.096256  121842 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 10:47:00.106757  121842 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 10:47:00.106786  121842 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 10:47:00.297925  121842 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 10:47:00.297955  121842 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 10:47:00.302895  121842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 10:47:00.302914  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 10:47:00.331298  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 10:47:00.333978  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:47:00.439913  121842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 10:47:00.439951  121842 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 10:47:00.557187  121842 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 10:47:00.557221  121842 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 10:47:00.651864  121842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 10:47:00.651888  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 10:47:00.828299  121842 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 10:47:00.828324  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 10:47:00.892659  121842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 10:47:00.892686  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 10:47:01.071250  121842 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 10:47:01.071285  121842 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 10:47:01.134892  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 10:47:01.466434  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 10:47:04.117680  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.912649402s)
	I0729 10:47:04.117767  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:04.117769  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.880872987s)
	I0729 10:47:04.117787  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:04.117818  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:04.117834  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:04.117885  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.839943567s)
	I0729 10:47:04.117898  121842 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.831762118s)
	I0729 10:47:04.117922  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:04.117924  121842 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.831748657s)
	I0729 10:47:04.117934  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:04.117940  121842 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 10:47:04.118011  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.730836074s)
	I0729 10:47:04.118047  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:04.118057  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:04.118307  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:04.118335  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:04.118346  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:04.118361  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:04.118373  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:04.118417  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:04.118425  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:04.118439  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:04.118450  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:04.118491  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:04.118530  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:04.118555  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:04.118569  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:04.118584  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:04.118613  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:04.118656  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:04.118693  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:04.118711  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:04.118822  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:04.118757  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:04.118927  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:04.119028  121842 node_ready.go:35] waiting up to 6m0s for node "addons-693556" to be "Ready" ...
	I0729 10:47:04.119078  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:04.119090  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:04.119139  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:04.118781  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:04.119172  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:04.119187  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:04.119706  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:04.119722  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:04.176602  121842 node_ready.go:49] node "addons-693556" has status "Ready":"True"
	I0729 10:47:04.176628  121842 node_ready.go:38] duration metric: took 57.57767ms for node "addons-693556" to be "Ready" ...
	I0729 10:47:04.176636  121842 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:47:04.270048  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:04.270074  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:04.270458  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:04.270482  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 10:47:04.270588  121842 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
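Editor's note: the warning above is an optimistic-concurrency conflict: the "local-path" StorageClass changed between the addon's read and its write, so the update was rejected. The usual remedy is to re-read the object and retry the change on conflict. A minimal client-go sketch of that pattern follows; it is not minikube's addon callback, and the kubeconfig path is assumed for illustration.

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		// Assumed kubeconfig location; adjust to your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// RetryOnConflict re-reads the StorageClass and reapplies the change
		// whenever the server reports "the object has been modified".
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = client.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			panic(err)
		}
	}
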
	I0729 10:47:04.299851  121842 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b7rnk" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:04.312516  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:04.312539  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:04.312871  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:04.312891  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:04.633299  121842 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-693556" context rescaled to 1 replicas
	I0729 10:47:05.937388  121842 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 10:47:05.937433  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:47:05.940584  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:47:05.941052  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:47:05.941081  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:47:05.941242  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:47:05.941496  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:47:05.941735  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:47:05.941952  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:47:06.107110  121842 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 10:47:06.209312  121842 addons.go:234] Setting addon gcp-auth=true in "addons-693556"
	I0729 10:47:06.209379  121842 host.go:66] Checking if "addons-693556" exists ...
	I0729 10:47:06.209727  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:06.209760  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:06.226242  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36225
	I0729 10:47:06.226779  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:06.227256  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:47:06.227275  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:06.227714  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:06.228165  121842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 10:47:06.228190  121842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 10:47:06.243551  121842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I0729 10:47:06.243958  121842 main.go:141] libmachine: () Calling .GetVersion
	I0729 10:47:06.244432  121842 main.go:141] libmachine: Using API Version  1
	I0729 10:47:06.244461  121842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 10:47:06.244803  121842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 10:47:06.245061  121842 main.go:141] libmachine: (addons-693556) Calling .GetState
	I0729 10:47:06.246528  121842 main.go:141] libmachine: (addons-693556) Calling .DriverName
	I0729 10:47:06.246783  121842 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 10:47:06.246812  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHHostname
	I0729 10:47:06.249704  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:47:06.250152  121842 main.go:141] libmachine: (addons-693556) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:e5:c9", ip: ""} in network mk-addons-693556: {Iface:virbr1 ExpiryTime:2024-07-29 11:46:18 +0000 UTC Type:0 Mac:52:54:00:c4:e5:c9 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-693556 Clientid:01:52:54:00:c4:e5:c9}
	I0729 10:47:06.250177  121842 main.go:141] libmachine: (addons-693556) DBG | domain addons-693556 has defined IP address 192.168.39.32 and MAC address 52:54:00:c4:e5:c9 in network mk-addons-693556
	I0729 10:47:06.250367  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHPort
	I0729 10:47:06.250597  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHKeyPath
	I0729 10:47:06.250782  121842 main.go:141] libmachine: (addons-693556) Calling .GetSSHUsername
	I0729 10:47:06.250959  121842 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/addons-693556/id_rsa Username:docker}
	I0729 10:47:06.327382  121842 pod_ready.go:102] pod "coredns-7db6d8ff4d-b7rnk" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:06.794636  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.391827678s)
	I0729 10:47:06.794694  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.794711  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.794716  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.3643739s)
	I0729 10:47:06.794763  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.794781  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.794825  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.361327035s)
	I0729 10:47:06.794869  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.337816845s)
	I0729 10:47:06.794870  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.794898  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.794904  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.794909  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.794985  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.912921356s)
	I0729 10:47:06.795013  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.795025  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.795148  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.795154  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.795167  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.795177  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.795184  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.795197  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.795207  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.795215  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.795222  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.795281  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.795306  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.795320  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.795328  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.795334  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.795349  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.795397  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.795423  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.795422  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.795429  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.795433  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.795439  121842 addons.go:475] Verifying addon registry=true in "addons-693556"
	I0729 10:47:06.795459  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.795492  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.795499  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.795506  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.795513  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.797199  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.797233  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.797240  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.797522  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.797553  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.797562  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.797572  121842 addons.go:475] Verifying addon ingress=true in "addons-693556"
	I0729 10:47:06.795441  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.797711  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.797779  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.797805  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.797812  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.798002  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.798019  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.798032  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.801499246s)
	I0729 10:47:06.798065  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.798080  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.798331  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.466993634s)
	I0729 10:47:06.798357  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.798377  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.798754  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.464728515s)
	W0729 10:47:06.798833  121842 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 10:47:06.798871  121842 retry.go:31] will retry after 296.395169ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
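Editor's note: the failed apply above bundles the csi-hostpath-snapclass VolumeSnapshotClass into the same batch as the CRDs that define it, so the custom resource cannot be mapped until the new CRDs are established; the later forced retry succeeds largely because the CRDs created in the first pass exist by then. A minimal sketch of waiting for a CRD to reach the Established condition before applying resources that depend on it follows; it assumes the apiextensions clientset, a recent apimachinery, and the same kubeconfig path as above, and is illustrative rather than minikube's own logic.

	package main

	import (
		"context"
		"fmt"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; adjust to your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := apiextclient.NewForConfigOrDie(cfg)

		crdName := "volumesnapshotclasses.snapshot.storage.k8s.io"
		// Poll until the CRD reports Established=True; only then is it safe
		// to create VolumeSnapshotClass objects against it.
		err = wait.PollUntilContextTimeout(context.TODO(), time.Second, time.Minute, true,
			func(ctx context.Context) (bool, error) {
				crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, crdName, metav1.GetOptions{})
				if err != nil {
					return false, nil // not created yet in this sketch; keep polling
				}
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("CRD established; dependent objects can be applied now")
	}

The same ordering concern applies to any apply batch that mixes a CRD with instances of it, which is why the log's retry after a short delay is usually enough.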
	I0729 10:47:06.798984  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.664047501s)
	I0729 10:47:06.799019  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.799034  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.799165  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.799190  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.799196  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.799215  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.799221  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.799275  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.799287  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.799295  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.799303  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.799496  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.799527  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.799535  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.799542  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:06.799549  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:06.799999  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.800029  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:06.800056  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.800068  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.800322  121842 out.go:177] * Verifying ingress addon...
	I0729 10:47:06.800371  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.800396  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.800606  121842 out.go:177] * Verifying registry addon...
	I0729 10:47:06.800921  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:06.800937  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:06.800951  121842 addons.go:475] Verifying addon metrics-server=true in "addons-693556"
	I0729 10:47:06.802392  121842 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-693556 service yakd-dashboard -n yakd-dashboard
	
	I0729 10:47:06.803172  121842 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 10:47:06.803483  121842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 10:47:06.819909  121842 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 10:47:06.819934  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:06.820022  121842 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 10:47:06.820040  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:07.096136  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:47:07.313530  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:07.321292  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:07.697007  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.230489905s)
	I0729 10:47:07.697062  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:07.697076  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:07.697073  121842 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.450259641s)
	I0729 10:47:07.697380  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:07.697398  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:07.697407  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:07.697415  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:07.697736  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:07.697763  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:07.697779  121842 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-693556"
	I0729 10:47:07.698546  121842 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:47:07.700346  121842 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 10:47:07.700400  121842 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 10:47:07.701982  121842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 10:47:07.702005  121842 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 10:47:07.702919  121842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 10:47:07.728896  121842 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 10:47:07.728923  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:07.812838  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:07.818852  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:07.865284  121842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 10:47:07.865312  121842 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 10:47:07.929047  121842 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 10:47:07.929071  121842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 10:47:07.988982  121842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 10:47:08.212306  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:08.316264  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:08.317559  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:08.708463  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:08.764031  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.667833972s)
	I0729 10:47:08.764096  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:08.764115  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:08.764455  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:08.764477  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:08.764487  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:08.764494  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:08.764496  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:08.764749  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:08.764765  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:08.764781  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:08.810617  121842 pod_ready.go:102] pod "coredns-7db6d8ff4d-b7rnk" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:08.811395  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:08.814067  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:09.040722  121842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.051685767s)
	I0729 10:47:09.040805  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:09.040827  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:09.041150  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:09.041171  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:09.041183  121842 main.go:141] libmachine: Making call to close driver server
	I0729 10:47:09.041192  121842 main.go:141] libmachine: (addons-693556) Calling .Close
	I0729 10:47:09.041421  121842 main.go:141] libmachine: Successfully made call to close driver server
	I0729 10:47:09.041457  121842 main.go:141] libmachine: (addons-693556) DBG | Closing plugin on server side
	I0729 10:47:09.041468  121842 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 10:47:09.043323  121842 addons.go:475] Verifying addon gcp-auth=true in "addons-693556"
	I0729 10:47:09.045547  121842 out.go:177] * Verifying gcp-auth addon...
	I0729 10:47:09.048333  121842 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 10:47:09.070704  121842 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 10:47:09.070730  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:09.213994  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:09.319198  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:09.319462  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:09.552618  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:09.708372  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:09.808153  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:09.809169  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:10.052568  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:10.210049  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:10.308447  121842 pod_ready.go:92] pod "coredns-7db6d8ff4d-b7rnk" in "kube-system" namespace has status "Ready":"True"
	I0729 10:47:10.308480  121842 pod_ready.go:81] duration metric: took 6.00858542s for pod "coredns-7db6d8ff4d-b7rnk" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.308494  121842 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vwj4d" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.309454  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:10.310162  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:10.316594  121842 pod_ready.go:92] pod "coredns-7db6d8ff4d-vwj4d" in "kube-system" namespace has status "Ready":"True"
	I0729 10:47:10.316618  121842 pod_ready.go:81] duration metric: took 8.115436ms for pod "coredns-7db6d8ff4d-vwj4d" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.316631  121842 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-693556" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.326068  121842 pod_ready.go:92] pod "etcd-addons-693556" in "kube-system" namespace has status "Ready":"True"
	I0729 10:47:10.326100  121842 pod_ready.go:81] duration metric: took 9.460959ms for pod "etcd-addons-693556" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.326115  121842 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-693556" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.334967  121842 pod_ready.go:92] pod "kube-apiserver-addons-693556" in "kube-system" namespace has status "Ready":"True"
	I0729 10:47:10.334990  121842 pod_ready.go:81] duration metric: took 8.867387ms for pod "kube-apiserver-addons-693556" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.335001  121842 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-693556" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.362341  121842 pod_ready.go:92] pod "kube-controller-manager-addons-693556" in "kube-system" namespace has status "Ready":"True"
	I0729 10:47:10.362364  121842 pod_ready.go:81] duration metric: took 27.357214ms for pod "kube-controller-manager-addons-693556" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.362376  121842 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qhz5" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.554620  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:10.704073  121842 pod_ready.go:92] pod "kube-proxy-6qhz5" in "kube-system" namespace has status "Ready":"True"
	I0729 10:47:10.704096  121842 pod_ready.go:81] duration metric: took 341.714851ms for pod "kube-proxy-6qhz5" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.704106  121842 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-693556" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:10.707814  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:10.808616  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:10.808993  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:11.052483  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:11.104377  121842 pod_ready.go:92] pod "kube-scheduler-addons-693556" in "kube-system" namespace has status "Ready":"True"
	I0729 10:47:11.104406  121842 pod_ready.go:81] duration metric: took 400.292976ms for pod "kube-scheduler-addons-693556" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:11.104419  121842 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace to be "Ready" ...
	I0729 10:47:11.208263  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:11.310118  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:11.311097  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:11.551826  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:11.707682  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:11.809528  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:11.809706  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:12.052072  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:12.208575  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:12.308448  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:12.308620  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:12.553236  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:12.708743  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:12.810795  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:12.812072  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:13.053382  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:13.111405  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:13.208945  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:13.308854  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:13.310492  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:13.552285  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:13.712944  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:13.810186  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:13.810226  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:14.053482  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:14.207847  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:14.309531  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:14.310880  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:14.552089  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:14.710644  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:14.808577  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:14.809430  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:15.052325  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:15.209679  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:15.309967  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:15.310368  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:15.551971  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:15.611904  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:15.710628  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:15.811295  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:15.811872  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:16.052226  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:16.208840  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:16.308664  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:16.308997  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:16.552423  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:16.710639  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:16.808316  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:16.809142  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:17.052128  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:17.209549  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:17.310885  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:17.311277  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:17.552241  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:17.614268  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:17.709299  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:17.947933  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:17.953700  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:18.052520  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:18.209392  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:18.308869  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:18.310615  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:18.552028  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:18.708519  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:18.808866  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:18.809691  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:19.052845  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:19.210217  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:19.307855  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:19.309160  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:19.551834  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:19.709529  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:19.809242  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:19.809566  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:20.052408  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:20.110867  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:20.208433  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:20.309182  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:20.309445  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:20.552979  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:20.707869  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:20.809089  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:20.810401  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:21.608648  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:21.610683  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:21.611354  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:21.614985  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:21.615666  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:21.708506  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:21.808674  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:21.809565  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:22.052215  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:22.111369  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:22.208202  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:22.307817  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:22.308274  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:22.988223  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:22.988463  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:22.988774  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:22.990524  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:23.052199  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:23.208259  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:23.309126  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:23.309452  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:23.552173  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:23.708147  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:23.808513  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:23.808702  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:24.052603  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:24.208929  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:24.308475  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:24.313629  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:24.551610  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:24.611408  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:24.707952  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:24.807447  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:24.808629  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:25.052392  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:25.208242  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:25.307451  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:25.308735  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:25.551786  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:25.708072  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:25.807654  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:25.820771  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:26.051859  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:26.208782  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:26.309480  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:26.314386  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:26.552449  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:26.708229  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:26.813575  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:26.826331  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:27.052799  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:27.112785  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:27.211605  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:27.307967  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:27.308616  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:27.552460  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:27.957366  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:27.959568  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:27.960793  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:28.053417  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:28.208734  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:28.310103  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:28.310330  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:28.551984  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:28.708810  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:28.809496  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:28.813695  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:29.056924  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:29.208615  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:29.308829  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:29.309607  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:29.552476  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:29.610970  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:29.708942  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:29.812474  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:47:29.813015  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:30.052321  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:30.208460  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:30.308498  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:30.308988  121842 kapi.go:107] duration metric: took 23.505481552s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 10:47:30.551834  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:30.708848  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:30.807986  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:31.052116  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:31.208751  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:31.308218  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:31.551792  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:31.709485  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:31.807391  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:32.051900  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:32.112414  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:32.208184  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:32.307250  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:32.551646  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:32.707964  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:32.807070  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:33.051785  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:33.212939  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:33.307756  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:33.552394  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:33.709275  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:33.807556  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:34.051881  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:34.207968  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:34.307211  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:34.551869  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:34.611293  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:34.710612  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:34.807843  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:35.051600  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:35.208361  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:35.308259  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:35.551837  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:35.709967  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:35.807810  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:36.450278  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:36.450626  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:36.453343  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:36.552171  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:36.708949  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:36.808720  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:37.052331  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:37.115653  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:37.208273  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:37.311241  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:37.552767  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:37.708930  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:37.808395  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:38.052240  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:38.208693  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:38.306926  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:38.551384  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:38.708790  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:38.807834  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:39.051324  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:39.208880  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:39.309150  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:39.553278  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:39.612038  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:39.709351  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:39.812721  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:40.052441  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:40.208813  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:40.308876  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:40.552864  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:40.709463  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:40.807261  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:41.052285  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:41.207969  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:41.308345  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:41.551852  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:41.708294  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:41.807840  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:42.052782  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:42.111430  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:42.208495  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:42.307494  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:42.551998  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:42.708682  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:42.808451  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:43.052154  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:43.208849  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:43.307800  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:43.552575  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:43.709208  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:43.809225  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:44.053994  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:44.210374  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:44.316929  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:44.553310  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:44.610435  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:44.709166  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:44.808692  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:45.052250  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:45.208277  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:45.308059  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:45.551930  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:45.709144  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:45.807873  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:46.051825  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:46.209562  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:46.308182  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:46.552542  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:46.611183  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:46.708271  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:46.807178  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:47.054342  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:47.208953  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:47.313785  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:47.552490  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:47.708472  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:47.809974  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:48.051803  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:48.209198  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:48.307891  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:48.552141  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:48.709402  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:48.808444  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:49.052330  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:49.110610  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:49.208220  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:49.307537  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:49.552330  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:49.709465  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:49.808428  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:50.051864  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:50.208690  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:50.308707  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:50.552921  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:50.709001  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:50.807244  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:51.055525  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:51.113043  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:51.208648  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:51.307755  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:51.552445  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:51.709464  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:51.812365  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:52.053402  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:52.209103  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:52.307573  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:52.552358  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:52.709581  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:52.807901  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:53.051891  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:53.208396  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:53.307177  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:53.552218  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:53.610709  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:53.708718  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:53.808081  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:54.052479  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:54.208705  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:54.307468  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:54.850899  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:54.853008  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:54.853538  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:55.052435  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:55.208193  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:55.307644  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:55.552223  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:55.709721  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:55.808555  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:56.055299  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:56.113819  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:56.209417  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:56.308716  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:56.558527  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:56.707563  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:56.807899  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:57.051481  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:57.209309  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:57.307794  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:57.552462  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:57.709118  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:57.809009  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:58.051950  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:58.208593  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:58.307552  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:58.552745  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:59.095071  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:59.105364  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:47:59.106465  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:59.106478  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:59.209521  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:59.312363  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:47:59.552091  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:47:59.710359  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:47:59.807593  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:00.052382  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:00.209020  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:00.308321  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:00.552407  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:00.708783  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:00.813317  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:01.051621  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:01.111021  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:01.209565  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:01.308059  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:01.552749  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:01.723635  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:01.811981  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:02.051776  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:02.209045  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:02.307778  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:02.552806  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:02.708260  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:02.807286  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:03.051940  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:03.208440  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:03.308063  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:03.551830  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:03.610050  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:03.709058  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:03.808580  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:04.052524  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:04.213675  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:04.307962  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:04.551781  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:04.712562  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:04.809638  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:05.053041  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:05.208156  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:05.307376  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:05.551850  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:05.610152  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:05.710200  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:05.814496  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:06.052655  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:06.209477  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:06.307593  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:06.552213  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:06.708184  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:06.807831  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:07.052035  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:07.208297  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:07.308038  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:07.551628  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:07.610624  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:07.708513  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:07.807787  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:08.052643  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:08.209048  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:08.308394  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:08.551982  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:08.708607  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:48:08.808678  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:09.051929  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:09.208254  121842 kapi.go:107] duration metric: took 1m1.505333842s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 10:48:09.307302  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:09.552222  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:09.813393  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:10.052652  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:10.110976  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:10.307681  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:10.553166  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:10.808752  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:11.052754  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:11.311402  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:11.552298  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:11.809939  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:12.052226  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:12.112146  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:12.309762  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:12.552559  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:12.810573  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:13.053149  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:13.307623  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:13.554745  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:13.808080  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:14.052668  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:14.121902  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:14.436575  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:14.623907  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:14.808064  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:15.051741  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:15.308197  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:15.551826  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:15.808300  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:16.051781  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:16.308326  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:16.551853  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:16.610754  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:16.808250  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:17.052438  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:17.321090  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:17.551451  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:17.806998  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:18.051618  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:18.321167  121842 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:48:18.551958  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:18.807562  121842 kapi.go:107] duration metric: took 1m12.004385463s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 10:48:19.051718  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:19.118852  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:19.554963  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:20.052517  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:20.551713  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:21.052229  121842 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:48:21.823129  121842 kapi.go:107] duration metric: took 1m12.774795834s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 10:48:21.825080  121842 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-693556 cluster.
	I0729 10:48:21.826744  121842 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 10:48:21.827940  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:21.829338  121842 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 10:48:21.830895  121842 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, default-storageclass, ingress-dns, cloud-spanner, helm-tiller, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 10:48:21.832425  121842 addons.go:510] duration metric: took 1m23.044841152s for enable addons: enabled=[storage-provisioner nvidia-device-plugin default-storageclass ingress-dns cloud-spanner helm-tiller inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0729 10:48:24.110454  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:26.610949  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:28.611475  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:31.111516  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:33.611167  121842 pod_ready.go:102] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"False"
	I0729 10:48:34.611451  121842 pod_ready.go:92] pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace has status "Ready":"True"
	I0729 10:48:34.611484  121842 pod_ready.go:81] duration metric: took 1m23.507055998s for pod "metrics-server-c59844bb4-czr92" in "kube-system" namespace to be "Ready" ...
	I0729 10:48:34.611501  121842 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-89pfk" in "kube-system" namespace to be "Ready" ...
	I0729 10:48:34.616995  121842 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-89pfk" in "kube-system" namespace has status "Ready":"True"
	I0729 10:48:34.617024  121842 pod_ready.go:81] duration metric: took 5.513587ms for pod "nvidia-device-plugin-daemonset-89pfk" in "kube-system" namespace to be "Ready" ...
	I0729 10:48:34.617045  121842 pod_ready.go:38] duration metric: took 1m30.440398454s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:48:34.617071  121842 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:48:34.617111  121842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 10:48:34.617171  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 10:48:34.668759  121842 cri.go:89] found id: "9acc77299b49878dc0fd7ac3a1413178fe3815109117b21f377274e9bb47ff92"
	I0729 10:48:34.668788  121842 cri.go:89] found id: ""
	I0729 10:48:34.668799  121842 logs.go:276] 1 containers: [9acc77299b49878dc0fd7ac3a1413178fe3815109117b21f377274e9bb47ff92]
	I0729 10:48:34.668856  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:34.672591  121842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 10:48:34.672647  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 10:48:34.709364  121842 cri.go:89] found id: "80813591c4981a5c29fde87beeb6e8f1aa9ff3ff678ef61398b89d7db2ab6723"
	I0729 10:48:34.709388  121842 cri.go:89] found id: ""
	I0729 10:48:34.709396  121842 logs.go:276] 1 containers: [80813591c4981a5c29fde87beeb6e8f1aa9ff3ff678ef61398b89d7db2ab6723]
	I0729 10:48:34.709451  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:34.713528  121842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 10:48:34.713606  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 10:48:34.749991  121842 cri.go:89] found id: "f893f4c01a707a9529eb29ddaa02d03522efc1512edb5918cc5a85af321d750a"
	I0729 10:48:34.750014  121842 cri.go:89] found id: ""
	I0729 10:48:34.750023  121842 logs.go:276] 1 containers: [f893f4c01a707a9529eb29ddaa02d03522efc1512edb5918cc5a85af321d750a]
	I0729 10:48:34.750084  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:34.753939  121842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 10:48:34.754004  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 10:48:34.797821  121842 cri.go:89] found id: "1b6ef45e1103f83904e50f893476ebb77ea928f8090a44091d07dd5b60f23490"
	I0729 10:48:34.797850  121842 cri.go:89] found id: ""
	I0729 10:48:34.797860  121842 logs.go:276] 1 containers: [1b6ef45e1103f83904e50f893476ebb77ea928f8090a44091d07dd5b60f23490]
	I0729 10:48:34.797918  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:34.801702  121842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 10:48:34.801772  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 10:48:34.849392  121842 cri.go:89] found id: "9ea695c1c8888819e0f6429349df7b1df84c7f03907e1d185cbce94d97af9a06"
	I0729 10:48:34.849424  121842 cri.go:89] found id: ""
	I0729 10:48:34.849436  121842 logs.go:276] 1 containers: [9ea695c1c8888819e0f6429349df7b1df84c7f03907e1d185cbce94d97af9a06]
	I0729 10:48:34.849500  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:34.853524  121842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 10:48:34.853585  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 10:48:34.893272  121842 cri.go:89] found id: "a4e8a57bb24f845a4d94956129d6b1249cd8a3a400b4dd90e0b9842ced188ada"
	I0729 10:48:34.893302  121842 cri.go:89] found id: ""
	I0729 10:48:34.893312  121842 logs.go:276] 1 containers: [a4e8a57bb24f845a4d94956129d6b1249cd8a3a400b4dd90e0b9842ced188ada]
	I0729 10:48:34.893379  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:34.897233  121842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 10:48:34.897312  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 10:48:34.932932  121842 cri.go:89] found id: ""
	I0729 10:48:34.932976  121842 logs.go:276] 0 containers: []
	W0729 10:48:34.932986  121842 logs.go:278] No container was found matching "kindnet"
	I0729 10:48:34.932999  121842 logs.go:123] Gathering logs for kube-apiserver [9acc77299b49878dc0fd7ac3a1413178fe3815109117b21f377274e9bb47ff92] ...
	I0729 10:48:34.933019  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9acc77299b49878dc0fd7ac3a1413178fe3815109117b21f377274e9bb47ff92"
	I0729 10:48:34.977576  121842 logs.go:123] Gathering logs for coredns [f893f4c01a707a9529eb29ddaa02d03522efc1512edb5918cc5a85af321d750a] ...
	I0729 10:48:34.977615  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f893f4c01a707a9529eb29ddaa02d03522efc1512edb5918cc5a85af321d750a"
	I0729 10:48:35.017404  121842 logs.go:123] Gathering logs for kube-scheduler [1b6ef45e1103f83904e50f893476ebb77ea928f8090a44091d07dd5b60f23490] ...
	I0729 10:48:35.017437  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b6ef45e1103f83904e50f893476ebb77ea928f8090a44091d07dd5b60f23490"
	I0729 10:48:35.061035  121842 logs.go:123] Gathering logs for kube-proxy [9ea695c1c8888819e0f6429349df7b1df84c7f03907e1d185cbce94d97af9a06] ...
	I0729 10:48:35.061069  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea695c1c8888819e0f6429349df7b1df84c7f03907e1d185cbce94d97af9a06"
	I0729 10:48:35.093796  121842 logs.go:123] Gathering logs for dmesg ...
	I0729 10:48:35.093825  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 10:48:35.107698  121842 logs.go:123] Gathering logs for describe nodes ...
	I0729 10:48:35.107729  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 10:48:35.220185  121842 logs.go:123] Gathering logs for etcd [80813591c4981a5c29fde87beeb6e8f1aa9ff3ff678ef61398b89d7db2ab6723] ...
	I0729 10:48:35.220218  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80813591c4981a5c29fde87beeb6e8f1aa9ff3ff678ef61398b89d7db2ab6723"
	I0729 10:48:35.284922  121842 logs.go:123] Gathering logs for kube-controller-manager [a4e8a57bb24f845a4d94956129d6b1249cd8a3a400b4dd90e0b9842ced188ada] ...
	I0729 10:48:35.284976  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4e8a57bb24f845a4d94956129d6b1249cd8a3a400b4dd90e0b9842ced188ada"
	I0729 10:48:35.343435  121842 logs.go:123] Gathering logs for CRI-O ...
	I0729 10:48:35.343481  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 10:48:36.318646  121842 logs.go:123] Gathering logs for container status ...
	I0729 10:48:36.318705  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 10:48:36.374921  121842 logs.go:123] Gathering logs for kubelet ...
	I0729 10:48:36.374961  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 10:48:36.425091  121842 logs.go:138] Found kubelet problem: Jul 29 10:47:02 addons-693556 kubelet[1265]: W0729 10:47:02.180532    1265 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-693556" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-693556' and this object
	W0729 10:48:36.425257  121842 logs.go:138] Found kubelet problem: Jul 29 10:47:02 addons-693556 kubelet[1265]: E0729 10:47:02.180568    1265 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-693556" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-693556' and this object
	W0729 10:48:36.425433  121842 logs.go:138] Found kubelet problem: Jul 29 10:47:02 addons-693556 kubelet[1265]: W0729 10:47:02.180607    1265 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-693556' and this object
	W0729 10:48:36.425590  121842 logs.go:138] Found kubelet problem: Jul 29 10:47:02 addons-693556 kubelet[1265]: E0729 10:47:02.180617    1265 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-693556' and this object
	W0729 10:48:36.430314  121842 logs.go:138] Found kubelet problem: Jul 29 10:47:04 addons-693556 kubelet[1265]: W0729 10:47:04.546859    1265 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-693556' and this object
	W0729 10:48:36.430529  121842 logs.go:138] Found kubelet problem: Jul 29 10:47:04 addons-693556 kubelet[1265]: E0729 10:47:04.546912    1265 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-693556' and this object
	I0729 10:48:36.459096  121842 out.go:304] Setting ErrFile to fd 2...
	I0729 10:48:36.459133  121842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 10:48:36.459196  121842 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0729 10:48:36.459211  121842 out.go:239]   Jul 29 10:47:02 addons-693556 kubelet[1265]: E0729 10:47:02.180568    1265 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-693556" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-693556' and this object
	  Jul 29 10:47:02 addons-693556 kubelet[1265]: E0729 10:47:02.180568    1265 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-693556" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-693556' and this object
	W0729 10:48:36.459226  121842 out.go:239]   Jul 29 10:47:02 addons-693556 kubelet[1265]: W0729 10:47:02.180607    1265 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-693556' and this object
	  Jul 29 10:47:02 addons-693556 kubelet[1265]: W0729 10:47:02.180607    1265 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-693556' and this object
	W0729 10:48:36.459235  121842 out.go:239]   Jul 29 10:47:02 addons-693556 kubelet[1265]: E0729 10:47:02.180617    1265 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-693556' and this object
	  Jul 29 10:47:02 addons-693556 kubelet[1265]: E0729 10:47:02.180617    1265 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-693556' and this object
	W0729 10:48:36.459244  121842 out.go:239]   Jul 29 10:47:04 addons-693556 kubelet[1265]: W0729 10:47:04.546859    1265 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-693556' and this object
	  Jul 29 10:47:04 addons-693556 kubelet[1265]: W0729 10:47:04.546859    1265 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-693556' and this object
	W0729 10:48:36.459255  121842 out.go:239]   Jul 29 10:47:04 addons-693556 kubelet[1265]: E0729 10:47:04.546912    1265 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-693556' and this object
	  Jul 29 10:47:04 addons-693556 kubelet[1265]: E0729 10:47:04.546912    1265 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-693556" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-693556' and this object
	I0729 10:48:36.459262  121842 out.go:304] Setting ErrFile to fd 2...
	I0729 10:48:36.459272  121842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:48:46.460104  121842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:48:46.478179  121842 api_server.go:72] duration metric: took 1m47.69063926s to wait for apiserver process to appear ...
	I0729 10:48:46.478211  121842 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:48:46.478253  121842 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 10:48:46.478314  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 10:48:46.518916  121842 cri.go:89] found id: "9acc77299b49878dc0fd7ac3a1413178fe3815109117b21f377274e9bb47ff92"
	I0729 10:48:46.518938  121842 cri.go:89] found id: ""
	I0729 10:48:46.518946  121842 logs.go:276] 1 containers: [9acc77299b49878dc0fd7ac3a1413178fe3815109117b21f377274e9bb47ff92]
	I0729 10:48:46.518997  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:46.522825  121842 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 10:48:46.522874  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 10:48:46.566699  121842 cri.go:89] found id: "80813591c4981a5c29fde87beeb6e8f1aa9ff3ff678ef61398b89d7db2ab6723"
	I0729 10:48:46.566730  121842 cri.go:89] found id: ""
	I0729 10:48:46.566741  121842 logs.go:276] 1 containers: [80813591c4981a5c29fde87beeb6e8f1aa9ff3ff678ef61398b89d7db2ab6723]
	I0729 10:48:46.566813  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:46.571070  121842 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 10:48:46.571151  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 10:48:46.612298  121842 cri.go:89] found id: "f893f4c01a707a9529eb29ddaa02d03522efc1512edb5918cc5a85af321d750a"
	I0729 10:48:46.612331  121842 cri.go:89] found id: ""
	I0729 10:48:46.612342  121842 logs.go:276] 1 containers: [f893f4c01a707a9529eb29ddaa02d03522efc1512edb5918cc5a85af321d750a]
	I0729 10:48:46.612396  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:46.616499  121842 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 10:48:46.616565  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 10:48:46.653327  121842 cri.go:89] found id: "1b6ef45e1103f83904e50f893476ebb77ea928f8090a44091d07dd5b60f23490"
	I0729 10:48:46.653356  121842 cri.go:89] found id: ""
	I0729 10:48:46.653366  121842 logs.go:276] 1 containers: [1b6ef45e1103f83904e50f893476ebb77ea928f8090a44091d07dd5b60f23490]
	I0729 10:48:46.653425  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:46.657561  121842 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 10:48:46.657630  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 10:48:46.694869  121842 cri.go:89] found id: "9ea695c1c8888819e0f6429349df7b1df84c7f03907e1d185cbce94d97af9a06"
	I0729 10:48:46.694896  121842 cri.go:89] found id: ""
	I0729 10:48:46.694905  121842 logs.go:276] 1 containers: [9ea695c1c8888819e0f6429349df7b1df84c7f03907e1d185cbce94d97af9a06]
	I0729 10:48:46.694967  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:46.699065  121842 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 10:48:46.699122  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 10:48:46.742372  121842 cri.go:89] found id: "a4e8a57bb24f845a4d94956129d6b1249cd8a3a400b4dd90e0b9842ced188ada"
	I0729 10:48:46.742405  121842 cri.go:89] found id: ""
	I0729 10:48:46.742416  121842 logs.go:276] 1 containers: [a4e8a57bb24f845a4d94956129d6b1249cd8a3a400b4dd90e0b9842ced188ada]
	I0729 10:48:46.742482  121842 ssh_runner.go:195] Run: which crictl
	I0729 10:48:46.747474  121842 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 10:48:46.747565  121842 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 10:48:46.783891  121842 cri.go:89] found id: ""
	I0729 10:48:46.783920  121842 logs.go:276] 0 containers: []
	W0729 10:48:46.783928  121842 logs.go:278] No container was found matching "kindnet"
	I0729 10:48:46.783941  121842 logs.go:123] Gathering logs for coredns [f893f4c01a707a9529eb29ddaa02d03522efc1512edb5918cc5a85af321d750a] ...
	I0729 10:48:46.783961  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f893f4c01a707a9529eb29ddaa02d03522efc1512edb5918cc5a85af321d750a"
	I0729 10:48:46.820165  121842 logs.go:123] Gathering logs for kube-proxy [9ea695c1c8888819e0f6429349df7b1df84c7f03907e1d185cbce94d97af9a06] ...
	I0729 10:48:46.820197  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea695c1c8888819e0f6429349df7b1df84c7f03907e1d185cbce94d97af9a06"
	I0729 10:48:46.858878  121842 logs.go:123] Gathering logs for kube-controller-manager [a4e8a57bb24f845a4d94956129d6b1249cd8a3a400b4dd90e0b9842ced188ada] ...
	I0729 10:48:46.858917  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4e8a57bb24f845a4d94956129d6b1249cd8a3a400b4dd90e0b9842ced188ada"
	I0729 10:48:46.925942  121842 logs.go:123] Gathering logs for CRI-O ...
	I0729 10:48:46.925981  121842 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-693556 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)
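
The gcp-auth lines in the log above spell out a label-based opt-out, and the kapi.go/pod_ready.go lines show which label selectors and pods the test was still polling when the 40-minute limit killed the run. As a rough sketch only (not part of the test output), the same checks can be repeated by hand against the addons-693556 profile; the kubeconfig context name, the busybox image, the pod name my-pod, and the label value "true" are assumptions for illustration:

    # Re-run the pod polls from kapi.go:96 manually, using the same label selectors.
    kubectl --context addons-693556 get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
    kubectl --context addons-693556 get pods -A -l kubernetes.io/minikube-addons=gcp-auth
    kubectl --context addons-693556 get pods -A -l app.kubernetes.io/name=ingress-nginx
    # Wait on the metrics-server pod named in the log, with an explicit timeout.
    kubectl --context addons-693556 -n kube-system wait --for=condition=Ready pod/metrics-server-c59844bb4-czr92 --timeout=120s
    # Per the log, a pod skips gcp-auth credential mounting if it carries the gcp-auth-skip-secret label at creation time.
    kubectl --context addons-693556 run my-pod --image=busybox --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600
    # Per the log's hint, existing pods pick up credentials after re-enabling the addon with --refresh.
    out/minikube-linux-amd64 -p addons-693556 addons enable gcp-auth --refresh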

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 node stop m02 -v=7 --alsologtostderr
E0729 11:35:08.355777  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:35:49.316596  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.472568828s)

                                                
                                                
-- stdout --
	* Stopping node "ha-691698-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:34:59.768906  139991 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:34:59.769073  139991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:34:59.769086  139991 out.go:304] Setting ErrFile to fd 2...
	I0729 11:34:59.769092  139991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:34:59.769328  139991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:34:59.769587  139991 mustload.go:65] Loading cluster: ha-691698
	I0729 11:34:59.769976  139991 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:34:59.769997  139991 stop.go:39] StopHost: ha-691698-m02
	I0729 11:34:59.770523  139991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:34:59.770584  139991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:34:59.786390  139991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36511
	I0729 11:34:59.786916  139991 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:34:59.787494  139991 main.go:141] libmachine: Using API Version  1
	I0729 11:34:59.787520  139991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:34:59.787909  139991 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:34:59.790252  139991 out.go:177] * Stopping node "ha-691698-m02"  ...
	I0729 11:34:59.791400  139991 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 11:34:59.791443  139991 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:34:59.791715  139991 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 11:34:59.791751  139991 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:34:59.794705  139991 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:34:59.795110  139991 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:34:59.795153  139991 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:34:59.795258  139991 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:34:59.795424  139991 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:34:59.795583  139991 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:34:59.795743  139991 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	I0729 11:34:59.879850  139991 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 11:34:59.932033  139991 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 11:34:59.985790  139991 main.go:141] libmachine: Stopping "ha-691698-m02"...
	I0729 11:34:59.985833  139991 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:34:59.987364  139991 main.go:141] libmachine: (ha-691698-m02) Calling .Stop
	I0729 11:34:59.991005  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 0/120
	I0729 11:35:00.993025  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 1/120
	I0729 11:35:01.994325  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 2/120
	I0729 11:35:02.995725  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 3/120
	I0729 11:35:03.997977  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 4/120
	I0729 11:35:05.000040  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 5/120
	I0729 11:35:06.001463  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 6/120
	I0729 11:35:07.003452  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 7/120
	I0729 11:35:08.005115  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 8/120
	I0729 11:35:09.006385  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 9/120
	I0729 11:35:10.008288  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 10/120
	I0729 11:35:11.009918  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 11/120
	I0729 11:35:12.011354  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 12/120
	I0729 11:35:13.012697  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 13/120
	I0729 11:35:14.014983  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 14/120
	I0729 11:35:15.016418  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 15/120
	I0729 11:35:16.017914  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 16/120
	I0729 11:35:17.019410  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 17/120
	I0729 11:35:18.020870  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 18/120
	I0729 11:35:19.022289  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 19/120
	I0729 11:35:20.024807  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 20/120
	I0729 11:35:21.026697  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 21/120
	I0729 11:35:22.028099  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 22/120
	I0729 11:35:23.029557  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 23/120
	I0729 11:35:24.031745  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 24/120
	I0729 11:35:25.034040  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 25/120
	I0729 11:35:26.035642  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 26/120
	I0729 11:35:27.037095  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 27/120
	I0729 11:35:28.038487  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 28/120
	I0729 11:35:29.039930  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 29/120
	I0729 11:35:30.041896  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 30/120
	I0729 11:35:31.043563  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 31/120
	I0729 11:35:32.045027  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 32/120
	I0729 11:35:33.047562  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 33/120
	I0729 11:35:34.049088  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 34/120
	I0729 11:35:35.051050  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 35/120
	I0729 11:35:36.052530  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 36/120
	I0729 11:35:37.054233  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 37/120
	I0729 11:35:38.056212  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 38/120
	I0729 11:35:39.057653  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 39/120
	I0729 11:35:40.059510  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 40/120
	I0729 11:35:41.060906  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 41/120
	I0729 11:35:42.062290  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 42/120
	I0729 11:35:43.064438  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 43/120
	I0729 11:35:44.065788  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 44/120
	I0729 11:35:45.067297  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 45/120
	I0729 11:35:46.068553  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 46/120
	I0729 11:35:47.070002  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 47/120
	I0729 11:35:48.071430  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 48/120
	I0729 11:35:49.073029  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 49/120
	I0729 11:35:50.075075  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 50/120
	I0729 11:35:51.076394  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 51/120
	I0729 11:35:52.077799  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 52/120
	I0729 11:35:53.079638  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 53/120
	I0729 11:35:54.081060  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 54/120
	I0729 11:35:55.083102  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 55/120
	I0729 11:35:56.084662  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 56/120
	I0729 11:35:57.086068  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 57/120
	I0729 11:35:58.087668  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 58/120
	I0729 11:35:59.089411  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 59/120
	I0729 11:36:00.091631  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 60/120
	I0729 11:36:01.093393  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 61/120
	I0729 11:36:02.094896  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 62/120
	I0729 11:36:03.096229  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 63/120
	I0729 11:36:04.097549  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 64/120
	I0729 11:36:05.099229  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 65/120
	I0729 11:36:06.100666  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 66/120
	I0729 11:36:07.102058  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 67/120
	I0729 11:36:08.103270  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 68/120
	I0729 11:36:09.105525  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 69/120
	I0729 11:36:10.107724  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 70/120
	I0729 11:36:11.109103  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 71/120
	I0729 11:36:12.111422  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 72/120
	I0729 11:36:13.113003  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 73/120
	I0729 11:36:14.114561  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 74/120
	I0729 11:36:15.116707  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 75/120
	I0729 11:36:16.118207  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 76/120
	I0729 11:36:17.120360  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 77/120
	I0729 11:36:18.121875  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 78/120
	I0729 11:36:19.123196  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 79/120
	I0729 11:36:20.125333  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 80/120
	I0729 11:36:21.126886  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 81/120
	I0729 11:36:22.128193  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 82/120
	I0729 11:36:23.129918  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 83/120
	I0729 11:36:24.131282  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 84/120
	I0729 11:36:25.133217  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 85/120
	I0729 11:36:26.135765  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 86/120
	I0729 11:36:27.137581  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 87/120
	I0729 11:36:28.139144  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 88/120
	I0729 11:36:29.141069  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 89/120
	I0729 11:36:30.143120  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 90/120
	I0729 11:36:31.144581  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 91/120
	I0729 11:36:32.146383  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 92/120
	I0729 11:36:33.148053  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 93/120
	I0729 11:36:34.149790  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 94/120
	I0729 11:36:35.151523  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 95/120
	I0729 11:36:36.153204  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 96/120
	I0729 11:36:37.155409  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 97/120
	I0729 11:36:38.156958  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 98/120
	I0729 11:36:39.159613  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 99/120
	I0729 11:36:40.161818  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 100/120
	I0729 11:36:41.163246  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 101/120
	I0729 11:36:42.164623  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 102/120
	I0729 11:36:43.166112  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 103/120
	I0729 11:36:44.168048  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 104/120
	I0729 11:36:45.170336  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 105/120
	I0729 11:36:46.171670  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 106/120
	I0729 11:36:47.173448  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 107/120
	I0729 11:36:48.175495  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 108/120
	I0729 11:36:49.177300  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 109/120
	I0729 11:36:50.179554  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 110/120
	I0729 11:36:51.181038  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 111/120
	I0729 11:36:52.182642  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 112/120
	I0729 11:36:53.184216  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 113/120
	I0729 11:36:54.185694  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 114/120
	I0729 11:36:55.187597  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 115/120
	I0729 11:36:56.189432  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 116/120
	I0729 11:36:57.190894  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 117/120
	I0729 11:36:58.193219  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 118/120
	I0729 11:36:59.194725  139991 main.go:141] libmachine: (ha-691698-m02) Waiting for machine to stop 119/120
	I0729 11:37:00.195196  139991 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 11:37:00.195358  139991 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-691698 node stop m02 -v=7 --alsologtostderr": exit status 30
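
The stop above is libmachine polling the m02 guest roughly once a second for 120 attempts and giving up while the domain is still "Running", which is what surfaces as exit status 30. As a rough sketch, outside the test itself, the underlying libvirt domain (named ha-691698-m02 in the DBG lines) could be inspected and forced down directly on the host; virsh access to the qemu:///system connection is assumed:

    # Ask libvirt what state it thinks the guest is in.
    sudo virsh -c qemu:///system domstate ha-691698-m02
    sudo virsh -c qemu:///system list --all
    # Request a guest-side shutdown again; if the guest never acts on it, power the domain off hard.
    sudo virsh -c qemu:///system shutdown ha-691698-m02
    sudo virsh -c qemu:///system destroy ha-691698-m02
    # Then re-check the cluster view the same way the next test step does.
    out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
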
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
E0729 11:37:11.237681  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr: exit status 3 (19.166264585s)

                                                
                                                
-- stdout --
	ha-691698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-691698-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:37:00.243060  140418 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:37:00.243206  140418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:00.243217  140418 out.go:304] Setting ErrFile to fd 2...
	I0729 11:37:00.243221  140418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:00.243415  140418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:37:00.243637  140418 out.go:298] Setting JSON to false
	I0729 11:37:00.243669  140418 mustload.go:65] Loading cluster: ha-691698
	I0729 11:37:00.243799  140418 notify.go:220] Checking for updates...
	I0729 11:37:00.244145  140418 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:37:00.244161  140418 status.go:255] checking status of ha-691698 ...
	I0729 11:37:00.244630  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:00.244697  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:00.260424  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41573
	I0729 11:37:00.260977  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:00.261665  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:00.261689  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:00.262078  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:00.262289  140418 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:37:00.263898  140418 status.go:330] ha-691698 host status = "Running" (err=<nil>)
	I0729 11:37:00.263921  140418 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:00.264211  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:00.264248  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:00.279287  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0729 11:37:00.279736  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:00.280352  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:00.280393  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:00.280775  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:00.280997  140418 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:37:00.283935  140418 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:00.284498  140418 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:00.284525  140418 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:00.284785  140418 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:00.285221  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:00.285265  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:00.301203  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46465
	I0729 11:37:00.301627  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:00.302194  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:00.302224  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:00.302547  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:00.302748  140418 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:37:00.302986  140418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:00.303048  140418 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:37:00.306353  140418 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:00.306841  140418 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:00.307008  140418 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:00.307053  140418 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:37:00.307241  140418 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:37:00.307412  140418 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:37:00.307579  140418 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:37:00.396422  140418 ssh_runner.go:195] Run: systemctl --version
	I0729 11:37:00.403193  140418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:00.419551  140418 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:00.419587  140418 api_server.go:166] Checking apiserver status ...
	I0729 11:37:00.419630  140418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:00.439376  140418 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0729 11:37:00.450471  140418 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:00.450542  140418 ssh_runner.go:195] Run: ls
	I0729 11:37:00.455262  140418 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:00.459539  140418 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:00.459563  140418 status.go:422] ha-691698 apiserver status = Running (err=<nil>)
	I0729 11:37:00.459573  140418 status.go:257] ha-691698 status: &{Name:ha-691698 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:00.459595  140418 status.go:255] checking status of ha-691698-m02 ...
	I0729 11:37:00.459877  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:00.459900  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:00.475571  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42143
	I0729 11:37:00.476070  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:00.476597  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:00.476619  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:00.476994  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:00.477201  140418 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:37:00.478934  140418 status.go:330] ha-691698-m02 host status = "Running" (err=<nil>)
	I0729 11:37:00.478952  140418 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:00.479229  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:00.479266  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:00.494862  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36313
	I0729 11:37:00.495337  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:00.495871  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:00.495892  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:00.496195  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:00.496409  140418 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:37:00.499425  140418 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:00.499923  140418 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:00.499960  140418 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:00.500062  140418 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:00.500458  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:00.500492  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:00.517272  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I0729 11:37:00.517786  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:00.518353  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:00.518374  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:00.518722  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:00.518939  140418 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:37:00.519138  140418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:00.519161  140418 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:37:00.522206  140418 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:00.522638  140418 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:00.522664  140418 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:00.522932  140418 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:37:00.523210  140418 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:37:00.523388  140418 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:37:00.523600  140418 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	W0729 11:37:19.009357  140418 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.5:22: connect: no route to host
	W0729 11:37:19.009561  140418 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	E0729 11:37:19.009585  140418 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:19.009593  140418 status.go:257] ha-691698-m02 status: &{Name:ha-691698-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:37:19.009614  140418 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:19.009622  140418 status.go:255] checking status of ha-691698-m03 ...
	I0729 11:37:19.010042  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:19.010099  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:19.025695  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42571
	I0729 11:37:19.026243  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:19.026727  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:19.026750  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:19.027091  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:19.027268  140418 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:37:19.028930  140418 status.go:330] ha-691698-m03 host status = "Running" (err=<nil>)
	I0729 11:37:19.028947  140418 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:19.029255  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:19.029291  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:19.044765  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38155
	I0729 11:37:19.045264  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:19.045856  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:19.045889  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:19.046213  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:19.046443  140418 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:37:19.049504  140418 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:19.049992  140418 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:19.050022  140418 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:19.050246  140418 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:19.050624  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:19.050667  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:19.065957  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43605
	I0729 11:37:19.066379  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:19.066856  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:19.066880  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:19.067191  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:19.067393  140418 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:37:19.067579  140418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:19.067597  140418 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:37:19.070357  140418 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:19.070820  140418 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:19.070848  140418 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:19.070999  140418 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:37:19.071153  140418 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:37:19.071331  140418 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:37:19.071485  140418 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:37:19.156617  140418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:19.171699  140418 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:19.171733  140418 api_server.go:166] Checking apiserver status ...
	I0729 11:37:19.171780  140418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:19.185867  140418 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0729 11:37:19.195526  140418 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:19.195595  140418 ssh_runner.go:195] Run: ls
	I0729 11:37:19.200072  140418 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:19.204160  140418 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:19.204199  140418 status.go:422] ha-691698-m03 apiserver status = Running (err=<nil>)
	I0729 11:37:19.204207  140418 status.go:257] ha-691698-m03 status: &{Name:ha-691698-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:19.204236  140418 status.go:255] checking status of ha-691698-m04 ...
	I0729 11:37:19.204573  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:19.204607  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:19.219968  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I0729 11:37:19.220409  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:19.220910  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:19.220935  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:19.221276  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:19.221469  140418 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:37:19.223016  140418 status.go:330] ha-691698-m04 host status = "Running" (err=<nil>)
	I0729 11:37:19.223048  140418 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:19.223320  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:19.223342  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:19.239212  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0729 11:37:19.239715  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:19.240235  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:19.240260  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:19.240582  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:19.240775  140418 main.go:141] libmachine: (ha-691698-m04) Calling .GetIP
	I0729 11:37:19.243429  140418 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:19.243865  140418 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:19.243889  140418 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:19.244068  140418 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:19.244378  140418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:19.244419  140418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:19.260073  140418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35373
	I0729 11:37:19.260605  140418 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:19.261089  140418 main.go:141] libmachine: Using API Version  1
	I0729 11:37:19.261113  140418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:19.261407  140418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:19.261602  140418 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:37:19.261779  140418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:19.261799  140418 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:37:19.264815  140418 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:19.265256  140418 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:19.265287  140418 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:19.265455  140418 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:37:19.265624  140418 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:37:19.265772  140418 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:37:19.265904  140418 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	I0729 11:37:19.345387  140418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:19.360699  140418 status.go:257] ha-691698-m04 status: &{Name:ha-691698-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr" : exit status 3
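The exit status 3 above stems from the SSH dial to ha-691698-m02 failing with "no route to host" while the other nodes report Running. A minimal shell sketch for re-checking that node by hand on the CI host, assuming the profile name, libvirt URI, key path and IP taken from the log above are still valid (the ConnectTimeout value is illustrative, not part of the test):

	# Re-run the status check that failed (same command as the test args above)
	out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
	# Check whether the m02 guest is still running at the libvirt level (URI from the cluster config)
	virsh -c qemu:///system domstate ha-691698-m02
	# Probe SSH reachability directly, using the key path and IP reported in the log
	ssh -o ConnectTimeout=5 -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa docker@192.168.39.5 true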
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-691698 -n ha-691698
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-691698 logs -n 25: (1.299972729s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1858176500/001/cp-test_ha-691698-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698:/home/docker/cp-test_ha-691698-m03_ha-691698.txt                       |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698 sudo cat                                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m03_ha-691698.txt                                 |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m02:/home/docker/cp-test_ha-691698-m03_ha-691698-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m02 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m03_ha-691698-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04:/home/docker/cp-test_ha-691698-m03_ha-691698-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m04 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m03_ha-691698-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp testdata/cp-test.txt                                                | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1858176500/001/cp-test_ha-691698-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698:/home/docker/cp-test_ha-691698-m04_ha-691698.txt                       |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698 sudo cat                                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698.txt                                 |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m02:/home/docker/cp-test_ha-691698-m04_ha-691698-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m02 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03:/home/docker/cp-test_ha-691698-m04_ha-691698-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m03 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-691698 node stop m02 -v=7                                                     | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:30:19
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:30:19.109800  135944 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:30:19.109894  135944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:30:19.109901  135944 out.go:304] Setting ErrFile to fd 2...
	I0729 11:30:19.109905  135944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:30:19.110113  135944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:30:19.110673  135944 out.go:298] Setting JSON to false
	I0729 11:30:19.111583  135944 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4370,"bootTime":1722248249,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:30:19.111641  135944 start.go:139] virtualization: kvm guest
	I0729 11:30:19.113602  135944 out.go:177] * [ha-691698] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:30:19.114844  135944 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 11:30:19.114889  135944 notify.go:220] Checking for updates...
	I0729 11:30:19.117179  135944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:30:19.118330  135944 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:30:19.119421  135944 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:30:19.120555  135944 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:30:19.121649  135944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:30:19.122987  135944 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:30:19.159520  135944 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 11:30:19.160871  135944 start.go:297] selected driver: kvm2
	I0729 11:30:19.161000  135944 start.go:901] validating driver "kvm2" against <nil>
	I0729 11:30:19.161040  135944 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:30:19.162553  135944 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:30:19.162633  135944 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:30:19.178223  135944 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:30:19.178282  135944 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:30:19.178474  135944 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:30:19.178517  135944 cni.go:84] Creating CNI manager for ""
	I0729 11:30:19.178537  135944 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 11:30:19.178549  135944 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 11:30:19.178615  135944 start.go:340] cluster config:
	{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:30:19.178704  135944 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:30:19.180298  135944 out.go:177] * Starting "ha-691698" primary control-plane node in "ha-691698" cluster
	I0729 11:30:19.181407  135944 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:30:19.181437  135944 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 11:30:19.181444  135944 cache.go:56] Caching tarball of preloaded images
	I0729 11:30:19.181516  135944 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:30:19.181530  135944 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:30:19.181817  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:30:19.181839  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json: {Name:mke678dd073965d3ae53a18897ada1c5c7139621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:19.181964  135944 start.go:360] acquireMachinesLock for ha-691698: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:30:19.181991  135944 start.go:364] duration metric: took 15.311µs to acquireMachinesLock for "ha-691698"
	I0729 11:30:19.182006  135944 start.go:93] Provisioning new machine with config: &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:30:19.182060  135944 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 11:30:19.183523  135944 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:30:19.183631  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:30:19.183663  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:30:19.199214  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45541
	I0729 11:30:19.199720  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:30:19.200218  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:30:19.200240  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:30:19.200647  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:30:19.200816  135944 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:30:19.200988  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:19.201124  135944 start.go:159] libmachine.API.Create for "ha-691698" (driver="kvm2")
	I0729 11:30:19.201153  135944 client.go:168] LocalClient.Create starting
	I0729 11:30:19.201190  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem
	I0729 11:30:19.201222  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:30:19.201235  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:30:19.201296  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem
	I0729 11:30:19.201316  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:30:19.201328  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:30:19.201343  135944 main.go:141] libmachine: Running pre-create checks...
	I0729 11:30:19.201352  135944 main.go:141] libmachine: (ha-691698) Calling .PreCreateCheck
	I0729 11:30:19.201686  135944 main.go:141] libmachine: (ha-691698) Calling .GetConfigRaw
	I0729 11:30:19.202018  135944 main.go:141] libmachine: Creating machine...
	I0729 11:30:19.202033  135944 main.go:141] libmachine: (ha-691698) Calling .Create
	I0729 11:30:19.202161  135944 main.go:141] libmachine: (ha-691698) Creating KVM machine...
	I0729 11:30:19.203682  135944 main.go:141] libmachine: (ha-691698) DBG | found existing default KVM network
	I0729 11:30:19.204680  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:19.204521  135967 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015470}
	I0729 11:30:19.204747  135944 main.go:141] libmachine: (ha-691698) DBG | created network xml: 
	I0729 11:30:19.204766  135944 main.go:141] libmachine: (ha-691698) DBG | <network>
	I0729 11:30:19.204776  135944 main.go:141] libmachine: (ha-691698) DBG |   <name>mk-ha-691698</name>
	I0729 11:30:19.204786  135944 main.go:141] libmachine: (ha-691698) DBG |   <dns enable='no'/>
	I0729 11:30:19.204797  135944 main.go:141] libmachine: (ha-691698) DBG |   
	I0729 11:30:19.204808  135944 main.go:141] libmachine: (ha-691698) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 11:30:19.204817  135944 main.go:141] libmachine: (ha-691698) DBG |     <dhcp>
	I0729 11:30:19.204828  135944 main.go:141] libmachine: (ha-691698) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 11:30:19.204861  135944 main.go:141] libmachine: (ha-691698) DBG |     </dhcp>
	I0729 11:30:19.204885  135944 main.go:141] libmachine: (ha-691698) DBG |   </ip>
	I0729 11:30:19.204897  135944 main.go:141] libmachine: (ha-691698) DBG |   
	I0729 11:30:19.204908  135944 main.go:141] libmachine: (ha-691698) DBG | </network>
	I0729 11:30:19.204934  135944 main.go:141] libmachine: (ha-691698) DBG | 
	I0729 11:30:19.209956  135944 main.go:141] libmachine: (ha-691698) DBG | trying to create private KVM network mk-ha-691698 192.168.39.0/24...
	I0729 11:30:19.275395  135944 main.go:141] libmachine: (ha-691698) DBG | private KVM network mk-ha-691698 192.168.39.0/24 created
	I0729 11:30:19.275428  135944 main.go:141] libmachine: (ha-691698) Setting up store path in /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698 ...
	I0729 11:30:19.275444  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:19.275349  135967 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:30:19.275461  135944 main.go:141] libmachine: (ha-691698) Building disk image from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 11:30:19.275535  135944 main.go:141] libmachine: (ha-691698) Downloading /home/jenkins/minikube-integration/19336-113730/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 11:30:19.538171  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:19.538045  135967 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa...
	I0729 11:30:19.642797  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:19.642625  135967 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/ha-691698.rawdisk...
	I0729 11:30:19.642870  135944 main.go:141] libmachine: (ha-691698) DBG | Writing magic tar header
	I0729 11:30:19.642889  135944 main.go:141] libmachine: (ha-691698) DBG | Writing SSH key tar header
	I0729 11:30:19.642902  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:19.642784  135967 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698 ...
	I0729 11:30:19.642916  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698
	I0729 11:30:19.642979  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698 (perms=drwx------)
	I0729 11:30:19.643005  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines (perms=drwxr-xr-x)
	I0729 11:30:19.643012  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines
	I0729 11:30:19.643035  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:30:19.643044  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730
	I0729 11:30:19.643051  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube (perms=drwxr-xr-x)
	I0729 11:30:19.643058  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730 (perms=drwxrwxr-x)
	I0729 11:30:19.643067  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 11:30:19.643074  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 11:30:19.643078  135944 main.go:141] libmachine: (ha-691698) Creating domain...
	I0729 11:30:19.643087  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 11:30:19.643092  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins
	I0729 11:30:19.643119  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home
	I0729 11:30:19.643143  135944 main.go:141] libmachine: (ha-691698) DBG | Skipping /home - not owner
	I0729 11:30:19.644234  135944 main.go:141] libmachine: (ha-691698) define libvirt domain using xml: 
	I0729 11:30:19.644258  135944 main.go:141] libmachine: (ha-691698) <domain type='kvm'>
	I0729 11:30:19.644265  135944 main.go:141] libmachine: (ha-691698)   <name>ha-691698</name>
	I0729 11:30:19.644270  135944 main.go:141] libmachine: (ha-691698)   <memory unit='MiB'>2200</memory>
	I0729 11:30:19.644275  135944 main.go:141] libmachine: (ha-691698)   <vcpu>2</vcpu>
	I0729 11:30:19.644279  135944 main.go:141] libmachine: (ha-691698)   <features>
	I0729 11:30:19.644284  135944 main.go:141] libmachine: (ha-691698)     <acpi/>
	I0729 11:30:19.644288  135944 main.go:141] libmachine: (ha-691698)     <apic/>
	I0729 11:30:19.644292  135944 main.go:141] libmachine: (ha-691698)     <pae/>
	I0729 11:30:19.644299  135944 main.go:141] libmachine: (ha-691698)     
	I0729 11:30:19.644304  135944 main.go:141] libmachine: (ha-691698)   </features>
	I0729 11:30:19.644309  135944 main.go:141] libmachine: (ha-691698)   <cpu mode='host-passthrough'>
	I0729 11:30:19.644313  135944 main.go:141] libmachine: (ha-691698)   
	I0729 11:30:19.644321  135944 main.go:141] libmachine: (ha-691698)   </cpu>
	I0729 11:30:19.644326  135944 main.go:141] libmachine: (ha-691698)   <os>
	I0729 11:30:19.644334  135944 main.go:141] libmachine: (ha-691698)     <type>hvm</type>
	I0729 11:30:19.644362  135944 main.go:141] libmachine: (ha-691698)     <boot dev='cdrom'/>
	I0729 11:30:19.644384  135944 main.go:141] libmachine: (ha-691698)     <boot dev='hd'/>
	I0729 11:30:19.644405  135944 main.go:141] libmachine: (ha-691698)     <bootmenu enable='no'/>
	I0729 11:30:19.644415  135944 main.go:141] libmachine: (ha-691698)   </os>
	I0729 11:30:19.644424  135944 main.go:141] libmachine: (ha-691698)   <devices>
	I0729 11:30:19.644432  135944 main.go:141] libmachine: (ha-691698)     <disk type='file' device='cdrom'>
	I0729 11:30:19.644446  135944 main.go:141] libmachine: (ha-691698)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/boot2docker.iso'/>
	I0729 11:30:19.644488  135944 main.go:141] libmachine: (ha-691698)       <target dev='hdc' bus='scsi'/>
	I0729 11:30:19.644503  135944 main.go:141] libmachine: (ha-691698)       <readonly/>
	I0729 11:30:19.644513  135944 main.go:141] libmachine: (ha-691698)     </disk>
	I0729 11:30:19.644524  135944 main.go:141] libmachine: (ha-691698)     <disk type='file' device='disk'>
	I0729 11:30:19.644537  135944 main.go:141] libmachine: (ha-691698)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 11:30:19.644557  135944 main.go:141] libmachine: (ha-691698)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/ha-691698.rawdisk'/>
	I0729 11:30:19.644572  135944 main.go:141] libmachine: (ha-691698)       <target dev='hda' bus='virtio'/>
	I0729 11:30:19.644580  135944 main.go:141] libmachine: (ha-691698)     </disk>
	I0729 11:30:19.644586  135944 main.go:141] libmachine: (ha-691698)     <interface type='network'>
	I0729 11:30:19.644593  135944 main.go:141] libmachine: (ha-691698)       <source network='mk-ha-691698'/>
	I0729 11:30:19.644598  135944 main.go:141] libmachine: (ha-691698)       <model type='virtio'/>
	I0729 11:30:19.644605  135944 main.go:141] libmachine: (ha-691698)     </interface>
	I0729 11:30:19.644611  135944 main.go:141] libmachine: (ha-691698)     <interface type='network'>
	I0729 11:30:19.644619  135944 main.go:141] libmachine: (ha-691698)       <source network='default'/>
	I0729 11:30:19.644624  135944 main.go:141] libmachine: (ha-691698)       <model type='virtio'/>
	I0729 11:30:19.644631  135944 main.go:141] libmachine: (ha-691698)     </interface>
	I0729 11:30:19.644636  135944 main.go:141] libmachine: (ha-691698)     <serial type='pty'>
	I0729 11:30:19.644643  135944 main.go:141] libmachine: (ha-691698)       <target port='0'/>
	I0729 11:30:19.644659  135944 main.go:141] libmachine: (ha-691698)     </serial>
	I0729 11:30:19.644677  135944 main.go:141] libmachine: (ha-691698)     <console type='pty'>
	I0729 11:30:19.644686  135944 main.go:141] libmachine: (ha-691698)       <target type='serial' port='0'/>
	I0729 11:30:19.644695  135944 main.go:141] libmachine: (ha-691698)     </console>
	I0729 11:30:19.644705  135944 main.go:141] libmachine: (ha-691698)     <rng model='virtio'>
	I0729 11:30:19.644714  135944 main.go:141] libmachine: (ha-691698)       <backend model='random'>/dev/random</backend>
	I0729 11:30:19.644726  135944 main.go:141] libmachine: (ha-691698)     </rng>
	I0729 11:30:19.644732  135944 main.go:141] libmachine: (ha-691698)     
	I0729 11:30:19.644743  135944 main.go:141] libmachine: (ha-691698)     
	I0729 11:30:19.644755  135944 main.go:141] libmachine: (ha-691698)   </devices>
	I0729 11:30:19.644765  135944 main.go:141] libmachine: (ha-691698) </domain>
	I0729 11:30:19.644771  135944 main.go:141] libmachine: (ha-691698) 
	I0729 11:30:19.649272  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:4c:d4:11 in network default
	I0729 11:30:19.649774  135944 main.go:141] libmachine: (ha-691698) Ensuring networks are active...
	I0729 11:30:19.649800  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:19.650476  135944 main.go:141] libmachine: (ha-691698) Ensuring network default is active
	I0729 11:30:19.650798  135944 main.go:141] libmachine: (ha-691698) Ensuring network mk-ha-691698 is active
	I0729 11:30:19.651309  135944 main.go:141] libmachine: (ha-691698) Getting domain xml...
	I0729 11:30:19.652086  135944 main.go:141] libmachine: (ha-691698) Creating domain...
	I0729 11:30:20.834386  135944 main.go:141] libmachine: (ha-691698) Waiting to get IP...
	I0729 11:30:20.835226  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:20.835610  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:20.835647  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:20.835592  135967 retry.go:31] will retry after 205.264513ms: waiting for machine to come up
	I0729 11:30:21.042069  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:21.042506  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:21.042531  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:21.042453  135967 retry.go:31] will retry after 253.112411ms: waiting for machine to come up
	I0729 11:30:21.297002  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:21.297371  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:21.297394  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:21.297339  135967 retry.go:31] will retry after 400.644185ms: waiting for machine to come up
	I0729 11:30:21.700028  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:21.700502  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:21.700535  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:21.700475  135967 retry.go:31] will retry after 408.754818ms: waiting for machine to come up
	I0729 11:30:22.111106  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:22.111519  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:22.111543  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:22.111433  135967 retry.go:31] will retry after 617.303625ms: waiting for machine to come up
	I0729 11:30:22.730373  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:22.730885  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:22.730911  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:22.730837  135967 retry.go:31] will retry after 832.743886ms: waiting for machine to come up
	I0729 11:30:23.564805  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:23.565227  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:23.565262  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:23.565165  135967 retry.go:31] will retry after 1.027807046s: waiting for machine to come up
	I0729 11:30:24.594076  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:24.594639  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:24.594681  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:24.594459  135967 retry.go:31] will retry after 1.23332671s: waiting for machine to come up
	I0729 11:30:25.830076  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:25.830500  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:25.830958  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:25.830460  135967 retry.go:31] will retry after 1.283922101s: waiting for machine to come up
	I0729 11:30:27.115966  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:27.116244  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:27.116263  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:27.116221  135967 retry.go:31] will retry after 2.291871554s: waiting for machine to come up
	I0729 11:30:29.410192  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:29.410659  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:29.410693  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:29.410600  135967 retry.go:31] will retry after 1.85080417s: waiting for machine to come up
	I0729 11:30:31.263175  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:31.263489  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:31.263503  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:31.263463  135967 retry.go:31] will retry after 3.371378134s: waiting for machine to come up
	I0729 11:30:34.636642  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:34.637032  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:34.637054  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:34.636988  135967 retry.go:31] will retry after 2.996860971s: waiting for machine to come up
	I0729 11:30:37.637160  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:37.637558  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:37.637585  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:37.637511  135967 retry.go:31] will retry after 5.400226697s: waiting for machine to come up
	I0729 11:30:43.041917  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.042434  135944 main.go:141] libmachine: (ha-691698) Found IP for machine: 192.168.39.244
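
The retry intervals logged above (205ms, 253ms, 400ms, ... up to ~5.4s) show retry.go polling the libvirt DHCP leases with a growing, jittered delay until the new domain is handed an address. A minimal, self-contained Go sketch of that polling pattern (the lookupIP helper and the exact backoff policy are assumptions for illustration, not minikube's actual code):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases for the domain's
    // MAC; it fails until the guest has been handed an address.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls with a jittered, roughly doubling delay, which is why the
    // intervals in the log grow from ~200ms to a few seconds.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            // add up to 50% jitter so parallel waiters do not poll in lockstep
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
    }

    func main() {
        if _, err := waitForIP("52:54:00:5a:22:44", 2*time.Second); err != nil {
            fmt.Println(err)
        }
    }
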
	I0729 11:30:43.042453  135944 main.go:141] libmachine: (ha-691698) Reserving static IP address...
	I0729 11:30:43.042466  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has current primary IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.042865  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find host DHCP lease matching {name: "ha-691698", mac: "52:54:00:5a:22:44", ip: "192.168.39.244"} in network mk-ha-691698
	I0729 11:30:43.117601  135944 main.go:141] libmachine: (ha-691698) DBG | Getting to WaitForSSH function...
	I0729 11:30:43.117625  135944 main.go:141] libmachine: (ha-691698) Reserved static IP address: 192.168.39.244
	I0729 11:30:43.117637  135944 main.go:141] libmachine: (ha-691698) Waiting for SSH to be available...
	I0729 11:30:43.119952  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.120289  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.120317  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.120460  135944 main.go:141] libmachine: (ha-691698) DBG | Using SSH client type: external
	I0729 11:30:43.120490  135944 main.go:141] libmachine: (ha-691698) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa (-rw-------)
	I0729 11:30:43.120525  135944 main.go:141] libmachine: (ha-691698) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:30:43.120538  135944 main.go:141] libmachine: (ha-691698) DBG | About to run SSH command:
	I0729 11:30:43.120554  135944 main.go:141] libmachine: (ha-691698) DBG | exit 0
	I0729 11:30:43.245070  135944 main.go:141] libmachine: (ha-691698) DBG | SSH cmd err, output: <nil>: 
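
The "Using SSH client type: external" lines above show the driver shelling out to /usr/bin/ssh with host-key checking, password authentication and known-hosts files disabled, offering only the generated machine key, and probing reachability with `exit 0`. A rough Go equivalent (the helper name and paths are illustrative, not minikube's API):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runExternalSSH execs the system ssh binary with the same style of options
    // seen in the log and returns the combined output of the remote command.
    func runExternalSSH(ip, keyPath, command string) (string, error) {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            command,
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := runExternalSSH("192.168.39.244", "/path/to/id_rsa", "exit 0")
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }
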
	I0729 11:30:43.245285  135944 main.go:141] libmachine: (ha-691698) KVM machine creation complete!
	I0729 11:30:43.245628  135944 main.go:141] libmachine: (ha-691698) Calling .GetConfigRaw
	I0729 11:30:43.246179  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:43.246402  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:43.246563  135944 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:30:43.246580  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:30:43.247698  135944 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:30:43.247719  135944 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:30:43.247724  135944 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:30:43.247732  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:43.249777  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.250122  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.250148  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.250294  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:43.250464  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.250656  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.250832  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:43.250996  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:43.251194  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:43.251206  135944 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:30:43.356685  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:30:43.356710  135944 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:30:43.356717  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:43.359166  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.359574  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.359604  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.359784  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:43.360025  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.360200  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.360361  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:43.360556  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:43.360749  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:43.360763  135944 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:30:43.465529  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:30:43.465602  135944 main.go:141] libmachine: found compatible host: buildroot
	I0729 11:30:43.465612  135944 main.go:141] libmachine: Provisioning with buildroot...
	I0729 11:30:43.465620  135944 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:30:43.465861  135944 buildroot.go:166] provisioning hostname "ha-691698"
	I0729 11:30:43.465889  135944 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:30:43.466133  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:43.468694  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.469086  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.469113  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.469357  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:43.469551  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.469697  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.469846  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:43.470055  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:43.470236  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:43.470258  135944 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-691698 && echo "ha-691698" | sudo tee /etc/hostname
	I0729 11:30:43.590126  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-691698
	
	I0729 11:30:43.590156  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:43.592840  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.593214  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.593246  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.593438  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:43.593685  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.593933  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.594075  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:43.594227  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:43.594403  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:43.594419  135944 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-691698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-691698/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-691698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:30:43.709450  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:30:43.709482  135944 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 11:30:43.709516  135944 buildroot.go:174] setting up certificates
	I0729 11:30:43.709527  135944 provision.go:84] configureAuth start
	I0729 11:30:43.709536  135944 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:30:43.709856  135944 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:30:43.712024  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.712317  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.712342  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.712503  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:43.714443  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.714747  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.714774  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.714860  135944 provision.go:143] copyHostCerts
	I0729 11:30:43.714892  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:30:43.714934  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 11:30:43.714947  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:30:43.715010  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 11:30:43.715088  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:30:43.715105  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 11:30:43.715112  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:30:43.715135  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 11:30:43.715176  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:30:43.715192  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 11:30:43.715198  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:30:43.715217  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 11:30:43.715264  135944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.ha-691698 san=[127.0.0.1 192.168.39.244 ha-691698 localhost minikube]
	I0729 11:30:44.206895  135944 provision.go:177] copyRemoteCerts
	I0729 11:30:44.206978  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:30:44.207009  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.209485  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.209789  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.209819  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.209977  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.210166  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.210336  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.210482  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:30:44.295084  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 11:30:44.295158  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:30:44.318943  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 11:30:44.319024  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 11:30:44.342695  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 11:30:44.342759  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:30:44.366483  135944 provision.go:87] duration metric: took 656.942521ms to configureAuth
	I0729 11:30:44.366514  135944 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:30:44.366706  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:30:44.366799  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.369558  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.369883  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.369920  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.370075  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.370283  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.370468  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.370630  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.370834  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:44.371030  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:44.371054  135944 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:30:44.631402  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:30:44.631432  135944 main.go:141] libmachine: Checking connection to Docker...
	I0729 11:30:44.631442  135944 main.go:141] libmachine: (ha-691698) Calling .GetURL
	I0729 11:30:44.632733  135944 main.go:141] libmachine: (ha-691698) DBG | Using libvirt version 6000000
	I0729 11:30:44.634693  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.635028  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.635049  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.635209  135944 main.go:141] libmachine: Docker is up and running!
	I0729 11:30:44.635221  135944 main.go:141] libmachine: Reticulating splines...
	I0729 11:30:44.635228  135944 client.go:171] duration metric: took 25.434064651s to LocalClient.Create
	I0729 11:30:44.635250  135944 start.go:167] duration metric: took 25.434127501s to libmachine.API.Create "ha-691698"
	I0729 11:30:44.635263  135944 start.go:293] postStartSetup for "ha-691698" (driver="kvm2")
	I0729 11:30:44.635278  135944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:30:44.635300  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:44.635562  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:30:44.635589  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.637700  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.637980  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.638007  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.638116  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.638300  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.638437  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.638591  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:30:44.719027  135944 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:30:44.723021  135944 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:30:44.723041  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 11:30:44.723103  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 11:30:44.723170  135944 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 11:30:44.723180  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /etc/ssl/certs/1209632.pem
	I0729 11:30:44.723265  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:30:44.732319  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:30:44.754767  135944 start.go:296] duration metric: took 119.486239ms for postStartSetup
	I0729 11:30:44.754833  135944 main.go:141] libmachine: (ha-691698) Calling .GetConfigRaw
	I0729 11:30:44.755410  135944 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:30:44.757916  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.758278  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.758305  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.758543  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:30:44.758711  135944 start.go:128] duration metric: took 25.576642337s to createHost
	I0729 11:30:44.758734  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.761016  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.761324  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.761348  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.761514  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.761717  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.761854  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.761998  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.762155  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:44.762328  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:44.762343  135944 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 11:30:44.865338  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252644.846414629
	
	I0729 11:30:44.865366  135944 fix.go:216] guest clock: 1722252644.846414629
	I0729 11:30:44.865374  135944 fix.go:229] Guest: 2024-07-29 11:30:44.846414629 +0000 UTC Remote: 2024-07-29 11:30:44.758721994 +0000 UTC m=+25.684891071 (delta=87.692635ms)
	I0729 11:30:44.865394  135944 fix.go:200] guest clock delta is within tolerance: 87.692635ms
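
The fix.go lines above read the guest clock with `date +%s.%N`, compare it to the host clock, and skip a resync because the ~88ms delta is within tolerance. A small Go sketch of that comparison (the 2s threshold is an assumed example value, not necessarily minikube's):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output (assumes the full 9-digit
    // nanosecond field) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    // withinTolerance reports whether the host/guest skew is small enough to
    // skip adjusting the guest clock.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        guest, _ := parseGuestClock("1722252644.846414629\n")
        fmt.Println("within tolerance:", withinTolerance(guest, time.Now(), 2*time.Second))
    }
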
	I0729 11:30:44.865399  135944 start.go:83] releasing machines lock for "ha-691698", held for 25.683399876s
	I0729 11:30:44.865420  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:44.865687  135944 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:30:44.867966  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.868284  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.868310  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.868444  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:44.868991  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:44.869192  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:44.869282  135944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:30:44.869337  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.869449  135944 ssh_runner.go:195] Run: cat /version.json
	I0729 11:30:44.869478  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.871631  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.871916  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.871938  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.872048  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.872053  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.872280  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.872356  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.872376  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.872458  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.872530  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.872597  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:30:44.872666  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.872767  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.872907  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:30:44.949388  135944 ssh_runner.go:195] Run: systemctl --version
	I0729 11:30:44.967968  135944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:30:45.123809  135944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:30:45.129586  135944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:30:45.129645  135944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:30:45.144349  135944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:30:45.144372  135944 start.go:495] detecting cgroup driver to use...
	I0729 11:30:45.144430  135944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:30:45.160322  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:30:45.172763  135944 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:30:45.172828  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:30:45.185920  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:30:45.198742  135944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:30:45.307520  135944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:30:45.442527  135944 docker.go:233] disabling docker service ...
	I0729 11:30:45.442608  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:30:45.455901  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:30:45.468348  135944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:30:45.598540  135944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:30:45.733852  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:30:45.746953  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:30:45.764765  135944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:30:45.764843  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.774761  135944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:30:45.774850  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.784805  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.794540  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.804447  135944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:30:45.814694  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.824193  135944 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.840107  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
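
The cri-o tuning above is a series of in-place edits to /etc/crio/crio.conf.d/02-crio.conf run over SSH: set the pause image, switch the cgroup manager to cgroupfs, pin conmon to the "pod" cgroup, and open unprivileged ports via default_sysctls. A hypothetical Go helper doing the same kind of key replacement on a local copy of the file (not how minikube itself applies the change, which uses sed on the guest):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setKey replaces any existing `key = ...` line in a cri-o drop-in with the
    // wanted value, the same effect as the sed one-liners above.
    func setKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        conf := "02-crio.conf" // local copy of /etc/crio/crio.conf.d/02-crio.conf
        fmt.Println(setKey(conf, "pause_image", "registry.k8s.io/pause:3.9"))
        fmt.Println(setKey(conf, "cgroup_manager", "cgroupfs"))
    }
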
	I0729 11:30:45.850356  135944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:30:45.859239  135944 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:30:45.859316  135944 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:30:45.871650  135944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:30:45.880929  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:30:46.000530  135944 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:30:46.135898  135944 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:30:46.135999  135944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:30:46.140584  135944 start.go:563] Will wait 60s for crictl version
	I0729 11:30:46.140650  135944 ssh_runner.go:195] Run: which crictl
	I0729 11:30:46.144122  135944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:30:46.178250  135944 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:30:46.178344  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:30:46.204928  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:30:46.233564  135944 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:30:46.234884  135944 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:30:46.237323  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:46.237662  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:46.237690  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:46.237879  135944 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:30:46.241677  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
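
The /etc/hosts updates above (here for host.minikube.internal, later for control-plane.minikube.internal) use a shell one-liner that filters out any existing mapping for the name, appends the new `IP<TAB>name` entry to a temp file, and copies it back into place with sudo. An equivalent sketch in Go (writes a local test file directly instead of staging via /tmp; illustrative only):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line that maps the given name and
    // appends "ip<TAB>name", roughly what the shell pipeline above does.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // old mapping for this name, will be replaced
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        fmt.Println(ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"))
    }
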
	I0729 11:30:46.253621  135944 kubeadm.go:883] updating cluster {Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:30:46.253734  135944 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:30:46.253779  135944 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:30:46.285442  135944 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:30:46.285512  135944 ssh_runner.go:195] Run: which lz4
	I0729 11:30:46.289139  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 11:30:46.289258  135944 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:30:46.293147  135944 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:30:46.293189  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:30:47.573761  135944 crio.go:462] duration metric: took 1.284535323s to copy over tarball
	I0729 11:30:47.573839  135944 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:30:49.730447  135944 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.15657971s)
	I0729 11:30:49.730473  135944 crio.go:469] duration metric: took 2.156679938s to extract the tarball
	I0729 11:30:49.730481  135944 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 11:30:49.767686  135944 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:30:49.809380  135944 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:30:49.809402  135944 cache_images.go:84] Images are preloaded, skipping loading
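
The preload step above runs `sudo crictl images --output json`, sees that registry.k8s.io/kube-apiserver:v1.30.3 is missing, copies the ~406MB preloaded-images tarball into the guest, extracts it into /var with lz4, and re-checks. A minimal sketch of that presence check, assuming the usual `crictl images -o json` output shape:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // imageList mirrors the relevant part of `crictl images --output json`.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether the runtime already has an image whose tag
    // contains ref, the check that decides whether the preload tarball must
    // be copied and extracted.
    func hasImage(ref string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if strings.Contains(tag, ref) {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
        fmt.Println("preloaded:", ok, "err:", err)
    }
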
	I0729 11:30:49.809410  135944 kubeadm.go:934] updating node { 192.168.39.244 8443 v1.30.3 crio true true} ...
	I0729 11:30:49.809520  135944 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-691698 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:30:49.809607  135944 ssh_runner.go:195] Run: crio config
	I0729 11:30:49.854193  135944 cni.go:84] Creating CNI manager for ""
	I0729 11:30:49.854217  135944 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 11:30:49.854229  135944 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:30:49.854254  135944 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.244 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-691698 NodeName:ha-691698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:30:49.854416  135944 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-691698"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:30:49.854443  135944 kube-vip.go:115] generating kube-vip config ...
	I0729 11:30:49.854497  135944 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 11:30:49.871563  135944 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 11:30:49.871680  135944 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
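
Note the kube-vip.go lines just before the manifest: load-balancing (lb_enable) is only switched on after `modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack` succeeds on the guest. A tiny Go sketch of that gate (run locally here; minikube executes the probe over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ipvsAvailable reports whether the IPVS kernel modules load, the same
    // probe the log shows before lb_enable is set to "true" in the manifest.
    func ipvsAvailable() bool {
        cmd := exec.Command("sudo", "sh", "-c",
            "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
        return cmd.Run() == nil
    }

    func main() {
        fmt.Printf("auto-enable control-plane load-balancing (lb_enable): %v\n", ipvsAvailable())
    }
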
	I0729 11:30:49.871738  135944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:30:49.883595  135944 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:30:49.883669  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 11:30:49.895077  135944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 11:30:49.910842  135944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:30:49.926669  135944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 11:30:49.942257  135944 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 11:30:49.958201  135944 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 11:30:49.961844  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:30:49.973747  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:30:50.105989  135944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:30:50.122400  135944 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698 for IP: 192.168.39.244
	I0729 11:30:50.122424  135944 certs.go:194] generating shared ca certs ...
	I0729 11:30:50.122442  135944 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.122611  135944 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 11:30:50.122652  135944 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 11:30:50.122659  135944 certs.go:256] generating profile certs ...
	I0729 11:30:50.122708  135944 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key
	I0729 11:30:50.122722  135944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt with IP's: []
	I0729 11:30:50.236541  135944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt ...
	I0729 11:30:50.236578  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt: {Name:mke8f3e6ec420b4c7ad08603a289200c805aa1e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.236794  135944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key ...
	I0729 11:30:50.236815  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key: {Name:mk60a2b766263435835454110f4741b531e9c8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.236933  135944 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.30d4e195
	I0729 11:30:50.236950  135944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.30d4e195 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244 192.168.39.254]
	I0729 11:30:50.397902  135944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.30d4e195 ...
	I0729 11:30:50.397936  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.30d4e195: {Name:mkea92b7b889a15dc340672004a73ae9e111dde7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.398125  135944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.30d4e195 ...
	I0729 11:30:50.398141  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.30d4e195: {Name:mkc391ea43924be309a9f605bb37e5b311e761f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.398237  135944 certs.go:381] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.30d4e195 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt
	I0729 11:30:50.398311  135944 certs.go:385] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.30d4e195 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key
	I0729 11:30:50.398362  135944 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key
	I0729 11:30:50.398376  135944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt with IP's: []
	I0729 11:30:50.480000  135944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt ...
	I0729 11:30:50.480032  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt: {Name:mk49c3bb32e9caa3f7ce2caa9de725305139b3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.480215  135944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key ...
	I0729 11:30:50.480228  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key: {Name:mkb7840f70ca8e3ba14ae9bc295eaa388bf6d4c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.480322  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 11:30:50.480341  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 11:30:50.480353  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 11:30:50.480369  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 11:30:50.480381  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 11:30:50.480392  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 11:30:50.480403  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 11:30:50.480413  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 11:30:50.480463  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 11:30:50.480499  135944 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 11:30:50.480506  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 11:30:50.480525  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:30:50.480543  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:30:50.480567  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 11:30:50.480602  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:30:50.480629  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /usr/share/ca-certificates/1209632.pem
	I0729 11:30:50.480643  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:30:50.480655  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem -> /usr/share/ca-certificates/120963.pem
	I0729 11:30:50.481276  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:30:50.505675  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:30:50.528598  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:30:50.551240  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 11:30:50.574514  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 11:30:50.597618  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:30:50.620388  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:30:50.643742  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:30:50.666672  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 11:30:50.689570  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:30:50.713112  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 11:30:50.736650  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:30:50.753003  135944 ssh_runner.go:195] Run: openssl version
	I0729 11:30:50.758628  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 11:30:50.769691  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 11:30:50.774166  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 11:30:50.774229  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 11:30:50.780217  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:30:50.791006  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:30:50.801847  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:30:50.806346  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:30:50.806411  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:30:50.811983  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:30:50.822657  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 11:30:50.833165  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 11:30:50.837715  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 11:30:50.837767  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 11:30:50.843180  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
	I0729 11:30:50.853861  135944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:30:50.857916  135944 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 11:30:50.857981  135944 kubeadm.go:392] StartCluster: {Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:30:50.858080  135944 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:30:50.858140  135944 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:30:50.894092  135944 cri.go:89] found id: ""
	I0729 11:30:50.894183  135944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:30:50.906886  135944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:30:50.917543  135944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:30:50.930252  135944 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:30:50.930286  135944 kubeadm.go:157] found existing configuration files:
	
	I0729 11:30:50.930340  135944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:30:50.939612  135944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:30:50.939684  135944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:30:50.949518  135944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:30:50.958597  135944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:30:50.958674  135944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:30:50.968466  135944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:30:50.981832  135944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:30:50.981893  135944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:30:50.991645  135944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:30:51.000921  135944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:30:51.001010  135944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:30:51.010557  135944 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:30:51.234055  135944 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:31:02.205155  135944 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:31:02.205229  135944 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:31:02.205354  135944 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:31:02.205494  135944 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:31:02.205599  135944 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:31:02.205653  135944 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:31:02.207264  135944 out.go:204]   - Generating certificates and keys ...
	I0729 11:31:02.207345  135944 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:31:02.207404  135944 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:31:02.207472  135944 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 11:31:02.207523  135944 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 11:31:02.207575  135944 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 11:31:02.207646  135944 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 11:31:02.207721  135944 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 11:31:02.207887  135944 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-691698 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0729 11:31:02.207964  135944 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 11:31:02.208086  135944 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-691698 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0729 11:31:02.208142  135944 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 11:31:02.208194  135944 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 11:31:02.208231  135944 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 11:31:02.208277  135944 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:31:02.208321  135944 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:31:02.208371  135944 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:31:02.208416  135944 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:31:02.208467  135944 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:31:02.208523  135944 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:31:02.208617  135944 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:31:02.208708  135944 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:31:02.211074  135944 out.go:204]   - Booting up control plane ...
	I0729 11:31:02.211166  135944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:31:02.211269  135944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:31:02.211363  135944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:31:02.211495  135944 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:31:02.211608  135944 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:31:02.211667  135944 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:31:02.211816  135944 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:31:02.211922  135944 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:31:02.212011  135944 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.346862ms
	I0729 11:31:02.212119  135944 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:31:02.212345  135944 kubeadm.go:310] [api-check] The API server is healthy after 5.946822842s
	I0729 11:31:02.212488  135944 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:31:02.212602  135944 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:31:02.212652  135944 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:31:02.212799  135944 kubeadm.go:310] [mark-control-plane] Marking the node ha-691698 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:31:02.212876  135944 kubeadm.go:310] [bootstrap-token] Using token: m6i535.jxv009nwzx1o5m73
	I0729 11:31:02.214392  135944 out.go:204]   - Configuring RBAC rules ...
	I0729 11:31:02.214544  135944 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:31:02.214665  135944 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:31:02.214809  135944 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:31:02.214965  135944 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:31:02.215096  135944 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:31:02.215219  135944 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:31:02.215346  135944 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:31:02.215410  135944 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:31:02.215480  135944 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:31:02.215489  135944 kubeadm.go:310] 
	I0729 11:31:02.215572  135944 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:31:02.215585  135944 kubeadm.go:310] 
	I0729 11:31:02.215642  135944 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:31:02.215649  135944 kubeadm.go:310] 
	I0729 11:31:02.215676  135944 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:31:02.215722  135944 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:31:02.215770  135944 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:31:02.215776  135944 kubeadm.go:310] 
	I0729 11:31:02.215816  135944 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:31:02.215821  135944 kubeadm.go:310] 
	I0729 11:31:02.215857  135944 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:31:02.215862  135944 kubeadm.go:310] 
	I0729 11:31:02.215907  135944 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:31:02.215968  135944 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:31:02.216056  135944 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:31:02.216065  135944 kubeadm.go:310] 
	I0729 11:31:02.216170  135944 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:31:02.216244  135944 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:31:02.216250  135944 kubeadm.go:310] 
	I0729 11:31:02.216313  135944 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m6i535.jxv009nwzx1o5m73 \
	I0729 11:31:02.216399  135944 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb \
	I0729 11:31:02.216424  135944 kubeadm.go:310] 	--control-plane 
	I0729 11:31:02.216430  135944 kubeadm.go:310] 
	I0729 11:31:02.216519  135944 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:31:02.216529  135944 kubeadm.go:310] 
	I0729 11:31:02.216629  135944 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m6i535.jxv009nwzx1o5m73 \
	I0729 11:31:02.216786  135944 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb 
	I0729 11:31:02.216800  135944 cni.go:84] Creating CNI manager for ""
	I0729 11:31:02.216807  135944 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 11:31:02.218300  135944 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 11:31:02.219570  135944 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 11:31:02.225110  135944 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 11:31:02.225131  135944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 11:31:02.247312  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 11:31:02.584841  135944 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:31:02.585048  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:02.585050  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-691698 minikube.k8s.io/updated_at=2024_07_29T11_31_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53 minikube.k8s.io/name=ha-691698 minikube.k8s.io/primary=true
	I0729 11:31:02.614359  135944 ops.go:34] apiserver oom_adj: -16
	I0729 11:31:02.740749  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:03.241732  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:03.741794  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:04.241836  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:04.740889  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:05.241753  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:05.741367  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:06.240788  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:06.741339  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:07.241145  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:07.741621  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:08.241500  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:08.741511  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:09.240825  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:09.740880  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:10.240772  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:10.741587  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:11.240828  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:11.741204  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:12.241449  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:12.741830  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:13.240845  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:13.740842  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:14.241264  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:14.741742  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:14.822131  135944 kubeadm.go:1113] duration metric: took 12.237156383s to wait for elevateKubeSystemPrivileges
	I0729 11:31:14.822176  135944 kubeadm.go:394] duration metric: took 23.964200026s to StartCluster
	I0729 11:31:14.822211  135944 settings.go:142] acquiring lock: {Name:mkb2a487c2f52476061a6d736b8e75563062eb9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:31:14.822372  135944 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:31:14.823354  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/kubeconfig: {Name:mkb219e196dca6dd8aa7af14918c6562be58786a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:31:14.823640  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 11:31:14.823645  135944 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:31:14.823673  135944 start.go:241] waiting for startup goroutines ...
	I0729 11:31:14.823684  135944 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:31:14.823757  135944 addons.go:69] Setting storage-provisioner=true in profile "ha-691698"
	I0729 11:31:14.823774  135944 addons.go:69] Setting default-storageclass=true in profile "ha-691698"
	I0729 11:31:14.823804  135944 addons.go:234] Setting addon storage-provisioner=true in "ha-691698"
	I0729 11:31:14.823826  135944 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-691698"
	I0729 11:31:14.823843  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:31:14.823849  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:31:14.824176  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:14.824204  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:14.824253  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:14.824289  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:14.839549  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
	I0729 11:31:14.839574  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36657
	I0729 11:31:14.840035  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:14.840066  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:14.840574  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:14.840600  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:14.840711  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:14.840746  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:14.841012  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:14.841145  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:14.841333  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:31:14.841564  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:14.841605  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:14.843849  135944 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:31:14.844190  135944 kapi.go:59] client config for ha-691698: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt", KeyFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key", CAFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 11:31:14.844798  135944 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 11:31:14.845096  135944 addons.go:234] Setting addon default-storageclass=true in "ha-691698"
	I0729 11:31:14.845145  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:31:14.845522  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:14.845571  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:14.857812  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I0729 11:31:14.858355  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:14.858867  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:14.858888  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:14.859215  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:14.859416  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:31:14.861205  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:31:14.861859  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I0729 11:31:14.862316  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:14.862820  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:14.862839  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:14.863202  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:14.863737  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:14.863762  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:14.864062  135944 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:31:14.865602  135944 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:31:14.865625  135944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:31:14.865645  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:31:14.868672  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:14.869150  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:31:14.869179  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:14.869478  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:31:14.869675  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:31:14.869840  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:31:14.869982  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:31:14.879437  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0729 11:31:14.879925  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:14.880438  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:14.880465  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:14.880794  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:14.881008  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:31:14.882673  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:31:14.882889  135944 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:31:14.882906  135944 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:31:14.882921  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:31:14.886201  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:14.886830  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:31:14.886851  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:14.887028  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:31:14.887231  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:31:14.887384  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:31:14.887515  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:31:14.988741  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 11:31:15.030626  135944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:31:15.031691  135944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:31:15.427595  135944 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 11:31:15.736784  135944 main.go:141] libmachine: Making call to close driver server
	I0729 11:31:15.736800  135944 main.go:141] libmachine: Making call to close driver server
	I0729 11:31:15.736819  135944 main.go:141] libmachine: (ha-691698) Calling .Close
	I0729 11:31:15.736809  135944 main.go:141] libmachine: (ha-691698) Calling .Close
	I0729 11:31:15.737163  135944 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:31:15.737181  135944 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:31:15.737191  135944 main.go:141] libmachine: Making call to close driver server
	I0729 11:31:15.737203  135944 main.go:141] libmachine: (ha-691698) Calling .Close
	I0729 11:31:15.737212  135944 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:31:15.737223  135944 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:31:15.737232  135944 main.go:141] libmachine: Making call to close driver server
	I0729 11:31:15.737239  135944 main.go:141] libmachine: (ha-691698) Calling .Close
	I0729 11:31:15.737496  135944 main.go:141] libmachine: (ha-691698) DBG | Closing plugin on server side
	I0729 11:31:15.737500  135944 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:31:15.737514  135944 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:31:15.737521  135944 main.go:141] libmachine: (ha-691698) DBG | Closing plugin on server side
	I0729 11:31:15.737548  135944 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:31:15.737557  135944 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:31:15.737628  135944 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 11:31:15.737642  135944 round_trippers.go:469] Request Headers:
	I0729 11:31:15.737653  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:31:15.737658  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:31:15.745808  135944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 11:31:15.746362  135944 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 11:31:15.746379  135944 round_trippers.go:469] Request Headers:
	I0729 11:31:15.746389  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:31:15.746396  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:31:15.746399  135944 round_trippers.go:473]     Content-Type: application/json
	I0729 11:31:15.749136  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:31:15.749347  135944 main.go:141] libmachine: Making call to close driver server
	I0729 11:31:15.749363  135944 main.go:141] libmachine: (ha-691698) Calling .Close
	I0729 11:31:15.749662  135944 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:31:15.749688  135944 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:31:15.751405  135944 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 11:31:15.752479  135944 addons.go:510] duration metric: took 928.790379ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 11:31:15.752531  135944 start.go:246] waiting for cluster config update ...
	I0729 11:31:15.752547  135944 start.go:255] writing updated cluster config ...
	I0729 11:31:15.754105  135944 out.go:177] 
	I0729 11:31:15.755518  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:31:15.755612  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:31:15.757155  135944 out.go:177] * Starting "ha-691698-m02" control-plane node in "ha-691698" cluster
	I0729 11:31:15.758504  135944 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:31:15.758532  135944 cache.go:56] Caching tarball of preloaded images
	I0729 11:31:15.758627  135944 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:31:15.758638  135944 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:31:15.758711  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:31:15.758888  135944 start.go:360] acquireMachinesLock for ha-691698-m02: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:31:15.758928  135944 start.go:364] duration metric: took 21.733µs to acquireMachinesLock for "ha-691698-m02"
	I0729 11:31:15.758945  135944 start.go:93] Provisioning new machine with config: &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:31:15.759010  135944 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 11:31:15.760628  135944 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:31:15.760723  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:15.760748  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:15.775902  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
	I0729 11:31:15.776462  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:15.777074  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:15.777098  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:15.777420  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:15.777662  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetMachineName
	I0729 11:31:15.777816  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:15.778019  135944 start.go:159] libmachine.API.Create for "ha-691698" (driver="kvm2")
	I0729 11:31:15.778049  135944 client.go:168] LocalClient.Create starting
	I0729 11:31:15.778088  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem
	I0729 11:31:15.778137  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:31:15.778160  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:31:15.778233  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem
	I0729 11:31:15.778262  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:31:15.778285  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:31:15.778310  135944 main.go:141] libmachine: Running pre-create checks...
	I0729 11:31:15.778321  135944 main.go:141] libmachine: (ha-691698-m02) Calling .PreCreateCheck
	I0729 11:31:15.778519  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetConfigRaw
	I0729 11:31:15.778989  135944 main.go:141] libmachine: Creating machine...
	I0729 11:31:15.779008  135944 main.go:141] libmachine: (ha-691698-m02) Calling .Create
	I0729 11:31:15.779154  135944 main.go:141] libmachine: (ha-691698-m02) Creating KVM machine...
	I0729 11:31:15.780378  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found existing default KVM network
	I0729 11:31:15.780497  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found existing private KVM network mk-ha-691698
	I0729 11:31:15.780661  135944 main.go:141] libmachine: (ha-691698-m02) Setting up store path in /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02 ...
	I0729 11:31:15.780692  135944 main.go:141] libmachine: (ha-691698-m02) Building disk image from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 11:31:15.780757  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:15.780649  136343 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:31:15.780870  135944 main.go:141] libmachine: (ha-691698-m02) Downloading /home/jenkins/minikube-integration/19336-113730/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 11:31:16.039880  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:16.039726  136343 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa...
	I0729 11:31:16.284500  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:16.284340  136343 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/ha-691698-m02.rawdisk...
	I0729 11:31:16.284534  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Writing magic tar header
	I0729 11:31:16.284550  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Writing SSH key tar header
	I0729 11:31:16.284562  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:16.284452  136343 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02 ...
	I0729 11:31:16.284607  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02
	I0729 11:31:16.284647  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines
	I0729 11:31:16.284669  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:31:16.284693  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02 (perms=drwx------)
	I0729 11:31:16.284708  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines (perms=drwxr-xr-x)
	I0729 11:31:16.284714  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube (perms=drwxr-xr-x)
	I0729 11:31:16.284725  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730 (perms=drwxrwxr-x)
	I0729 11:31:16.284737  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 11:31:16.284756  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730
	I0729 11:31:16.284772  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 11:31:16.284779  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 11:31:16.284798  135944 main.go:141] libmachine: (ha-691698-m02) Creating domain...
	I0729 11:31:16.284816  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 11:31:16.284832  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home
	I0729 11:31:16.284840  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Skipping /home - not owner
	I0729 11:31:16.286006  135944 main.go:141] libmachine: (ha-691698-m02) define libvirt domain using xml: 
	I0729 11:31:16.286031  135944 main.go:141] libmachine: (ha-691698-m02) <domain type='kvm'>
	I0729 11:31:16.286048  135944 main.go:141] libmachine: (ha-691698-m02)   <name>ha-691698-m02</name>
	I0729 11:31:16.286056  135944 main.go:141] libmachine: (ha-691698-m02)   <memory unit='MiB'>2200</memory>
	I0729 11:31:16.286068  135944 main.go:141] libmachine: (ha-691698-m02)   <vcpu>2</vcpu>
	I0729 11:31:16.286074  135944 main.go:141] libmachine: (ha-691698-m02)   <features>
	I0729 11:31:16.286084  135944 main.go:141] libmachine: (ha-691698-m02)     <acpi/>
	I0729 11:31:16.286090  135944 main.go:141] libmachine: (ha-691698-m02)     <apic/>
	I0729 11:31:16.286101  135944 main.go:141] libmachine: (ha-691698-m02)     <pae/>
	I0729 11:31:16.286114  135944 main.go:141] libmachine: (ha-691698-m02)     
	I0729 11:31:16.286148  135944 main.go:141] libmachine: (ha-691698-m02)   </features>
	I0729 11:31:16.286174  135944 main.go:141] libmachine: (ha-691698-m02)   <cpu mode='host-passthrough'>
	I0729 11:31:16.286187  135944 main.go:141] libmachine: (ha-691698-m02)   
	I0729 11:31:16.286198  135944 main.go:141] libmachine: (ha-691698-m02)   </cpu>
	I0729 11:31:16.286208  135944 main.go:141] libmachine: (ha-691698-m02)   <os>
	I0729 11:31:16.286218  135944 main.go:141] libmachine: (ha-691698-m02)     <type>hvm</type>
	I0729 11:31:16.286228  135944 main.go:141] libmachine: (ha-691698-m02)     <boot dev='cdrom'/>
	I0729 11:31:16.286237  135944 main.go:141] libmachine: (ha-691698-m02)     <boot dev='hd'/>
	I0729 11:31:16.286247  135944 main.go:141] libmachine: (ha-691698-m02)     <bootmenu enable='no'/>
	I0729 11:31:16.286257  135944 main.go:141] libmachine: (ha-691698-m02)   </os>
	I0729 11:31:16.286267  135944 main.go:141] libmachine: (ha-691698-m02)   <devices>
	I0729 11:31:16.286276  135944 main.go:141] libmachine: (ha-691698-m02)     <disk type='file' device='cdrom'>
	I0729 11:31:16.286293  135944 main.go:141] libmachine: (ha-691698-m02)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/boot2docker.iso'/>
	I0729 11:31:16.286305  135944 main.go:141] libmachine: (ha-691698-m02)       <target dev='hdc' bus='scsi'/>
	I0729 11:31:16.286317  135944 main.go:141] libmachine: (ha-691698-m02)       <readonly/>
	I0729 11:31:16.286327  135944 main.go:141] libmachine: (ha-691698-m02)     </disk>
	I0729 11:31:16.286365  135944 main.go:141] libmachine: (ha-691698-m02)     <disk type='file' device='disk'>
	I0729 11:31:16.286391  135944 main.go:141] libmachine: (ha-691698-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 11:31:16.286408  135944 main.go:141] libmachine: (ha-691698-m02)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/ha-691698-m02.rawdisk'/>
	I0729 11:31:16.286421  135944 main.go:141] libmachine: (ha-691698-m02)       <target dev='hda' bus='virtio'/>
	I0729 11:31:16.286434  135944 main.go:141] libmachine: (ha-691698-m02)     </disk>
	I0729 11:31:16.286444  135944 main.go:141] libmachine: (ha-691698-m02)     <interface type='network'>
	I0729 11:31:16.286455  135944 main.go:141] libmachine: (ha-691698-m02)       <source network='mk-ha-691698'/>
	I0729 11:31:16.286469  135944 main.go:141] libmachine: (ha-691698-m02)       <model type='virtio'/>
	I0729 11:31:16.286483  135944 main.go:141] libmachine: (ha-691698-m02)     </interface>
	I0729 11:31:16.286491  135944 main.go:141] libmachine: (ha-691698-m02)     <interface type='network'>
	I0729 11:31:16.286518  135944 main.go:141] libmachine: (ha-691698-m02)       <source network='default'/>
	I0729 11:31:16.286529  135944 main.go:141] libmachine: (ha-691698-m02)       <model type='virtio'/>
	I0729 11:31:16.286538  135944 main.go:141] libmachine: (ha-691698-m02)     </interface>
	I0729 11:31:16.286554  135944 main.go:141] libmachine: (ha-691698-m02)     <serial type='pty'>
	I0729 11:31:16.286565  135944 main.go:141] libmachine: (ha-691698-m02)       <target port='0'/>
	I0729 11:31:16.286575  135944 main.go:141] libmachine: (ha-691698-m02)     </serial>
	I0729 11:31:16.286588  135944 main.go:141] libmachine: (ha-691698-m02)     <console type='pty'>
	I0729 11:31:16.286603  135944 main.go:141] libmachine: (ha-691698-m02)       <target type='serial' port='0'/>
	I0729 11:31:16.286616  135944 main.go:141] libmachine: (ha-691698-m02)     </console>
	I0729 11:31:16.286630  135944 main.go:141] libmachine: (ha-691698-m02)     <rng model='virtio'>
	I0729 11:31:16.286651  135944 main.go:141] libmachine: (ha-691698-m02)       <backend model='random'>/dev/random</backend>
	I0729 11:31:16.286664  135944 main.go:141] libmachine: (ha-691698-m02)     </rng>
	I0729 11:31:16.286676  135944 main.go:141] libmachine: (ha-691698-m02)     
	I0729 11:31:16.286683  135944 main.go:141] libmachine: (ha-691698-m02)     
	I0729 11:31:16.286689  135944 main.go:141] libmachine: (ha-691698-m02)   </devices>
	I0729 11:31:16.286697  135944 main.go:141] libmachine: (ha-691698-m02) </domain>
	I0729 11:31:16.286702  135944 main.go:141] libmachine: (ha-691698-m02) 
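The block above is the libvirt domain XML that libmachine defines for the new VM. As a rough illustration only (not minikube's actual driver code; type and field names here are invented), the same kind of definition can be rendered from a Go template before being handed to libvirt:

package main

import (
	"os"
	"text/template"
)

// domainConfig holds the handful of values that vary per machine in the XML above.
type domainConfig struct {
	Name     string
	MemoryMB int
	VCPUs    int
	ISOPath  string
	DiskPath string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:     "ha-691698-m02",
		MemoryMB: 2200,
		VCPUs:    2,
		ISOPath:  "/path/to/boot2docker.iso",       // placeholder path
		DiskPath: "/path/to/ha-691698-m02.rawdisk", // placeholder path
		Network:  "mk-ha-691698",
	}
	// Render the XML to stdout; a real driver would pass the result to libvirt's define call.
	template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, cfg)
}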
	I0729 11:31:16.293362  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:f4:3e in network default
	I0729 11:31:16.293916  135944 main.go:141] libmachine: (ha-691698-m02) Ensuring networks are active...
	I0729 11:31:16.293953  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:16.294649  135944 main.go:141] libmachine: (ha-691698-m02) Ensuring network default is active
	I0729 11:31:16.294929  135944 main.go:141] libmachine: (ha-691698-m02) Ensuring network mk-ha-691698 is active
	I0729 11:31:16.295288  135944 main.go:141] libmachine: (ha-691698-m02) Getting domain xml...
	I0729 11:31:16.296007  135944 main.go:141] libmachine: (ha-691698-m02) Creating domain...
	I0729 11:31:17.580708  135944 main.go:141] libmachine: (ha-691698-m02) Waiting to get IP...
	I0729 11:31:17.581590  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:17.582071  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:17.582147  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:17.582060  136343 retry.go:31] will retry after 191.312407ms: waiting for machine to come up
	I0729 11:31:17.775740  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:17.776302  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:17.776332  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:17.776257  136343 retry.go:31] will retry after 262.18085ms: waiting for machine to come up
	I0729 11:31:18.039882  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:18.040360  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:18.040387  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:18.040320  136343 retry.go:31] will retry after 395.238801ms: waiting for machine to come up
	I0729 11:31:18.436806  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:18.437275  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:18.437312  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:18.437230  136343 retry.go:31] will retry after 467.322595ms: waiting for machine to come up
	I0729 11:31:18.905902  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:18.906302  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:18.906331  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:18.906255  136343 retry.go:31] will retry after 576.65986ms: waiting for machine to come up
	I0729 11:31:19.485198  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:19.485593  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:19.485622  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:19.485551  136343 retry.go:31] will retry after 792.662051ms: waiting for machine to come up
	I0729 11:31:20.279605  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:20.280004  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:20.280034  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:20.279951  136343 retry.go:31] will retry after 866.125195ms: waiting for machine to come up
	I0729 11:31:21.147263  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:21.147675  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:21.147699  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:21.147600  136343 retry.go:31] will retry after 1.459748931s: waiting for machine to come up
	I0729 11:31:22.609018  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:22.609433  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:22.609462  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:22.609386  136343 retry.go:31] will retry after 1.125830798s: waiting for machine to come up
	I0729 11:31:23.736689  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:23.737103  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:23.737123  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:23.737058  136343 retry.go:31] will retry after 1.852479279s: waiting for machine to come up
	I0729 11:31:25.591695  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:25.592063  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:25.592096  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:25.591997  136343 retry.go:31] will retry after 2.458375742s: waiting for machine to come up
	I0729 11:31:28.053015  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:28.053440  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:28.053465  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:28.053381  136343 retry.go:31] will retry after 3.563552308s: waiting for machine to come up
	I0729 11:31:31.618061  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:31.618375  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:31.618408  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:31.618358  136343 retry.go:31] will retry after 3.854966211s: waiting for machine to come up
	I0729 11:31:35.477501  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.478142  135944 main.go:141] libmachine: (ha-691698-m02) Found IP for machine: 192.168.39.5
	I0729 11:31:35.478165  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has current primary IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
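The "will retry after ..." lines above are the driver polling the libvirt network for a DHCP lease with growing delays until the VM reports an IP. A minimal sketch of that pattern, assuming a placeholder lookup function (this is not minikube's retry.go), looks like:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var attempts int

// lookupLeaseIP stands in for querying the libvirt network for a DHCP lease;
// in this sketch it simply fails a few times before returning an address.
func lookupLeaseIP(mac string) (string, error) {
	attempts++
	if attempts < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.5", nil
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jittered delay, as in the log
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2 // back off between polls
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	fmt.Println(waitForIP("52:54:00:d9:b5:f9", 2*time.Minute))
}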
	I0729 11:31:35.478173  135944 main.go:141] libmachine: (ha-691698-m02) Reserving static IP address...
	I0729 11:31:35.478628  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find host DHCP lease matching {name: "ha-691698-m02", mac: "52:54:00:d9:b5:f9", ip: "192.168.39.5"} in network mk-ha-691698
	I0729 11:31:35.557297  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Getting to WaitForSSH function...
	I0729 11:31:35.557325  135944 main.go:141] libmachine: (ha-691698-m02) Reserved static IP address: 192.168.39.5
	I0729 11:31:35.557340  135944 main.go:141] libmachine: (ha-691698-m02) Waiting for SSH to be available...
	I0729 11:31:35.560072  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.560373  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:35.560404  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.560575  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Using SSH client type: external
	I0729 11:31:35.560604  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa (-rw-------)
	I0729 11:31:35.560631  135944 main.go:141] libmachine: (ha-691698-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:31:35.560649  135944 main.go:141] libmachine: (ha-691698-m02) DBG | About to run SSH command:
	I0729 11:31:35.560675  135944 main.go:141] libmachine: (ha-691698-m02) DBG | exit 0
	I0729 11:31:35.681001  135944 main.go:141] libmachine: (ha-691698-m02) DBG | SSH cmd err, output: <nil>: 
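The WaitForSSH step above uses an "external" SSH client: it shells out to /usr/bin/ssh with hardened options and runs a no-op `exit 0` to confirm the guest is reachable. A small sketch of that probe (helper name invented, key path a placeholder):

package main

import (
	"fmt"
	"os/exec"
)

func probeSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0", // same no-op command the log runs to verify SSH is up
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	err := probeSSH("192.168.39.5", "/path/to/id_rsa") // placeholder key path
	fmt.Println("ssh reachable:", err == nil)
}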
	I0729 11:31:35.681285  135944 main.go:141] libmachine: (ha-691698-m02) KVM machine creation complete!
	I0729 11:31:35.681579  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetConfigRaw
	I0729 11:31:35.682171  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:35.682336  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:35.682514  135944 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:31:35.682529  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:31:35.683728  135944 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:31:35.683746  135944 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:31:35.683755  135944 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:31:35.683763  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:35.685972  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.686383  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:35.686416  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.686596  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:35.686813  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.687018  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.687198  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:35.687403  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:35.687625  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:35.687637  135944 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:31:35.788272  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:31:35.788298  135944 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:31:35.788308  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:35.791487  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.791828  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:35.791858  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.792005  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:35.792238  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.792397  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.792509  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:35.792681  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:35.792852  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:35.792862  135944 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:31:35.893736  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:31:35.893890  135944 main.go:141] libmachine: found compatible host: buildroot
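Provisioner detection above boils down to running `cat /etc/os-release` over SSH and matching the ID field ("buildroot" here). A minimal sketch of parsing that output into key/value pairs:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`) // strip quoting, e.g. PRETTY_NAME
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println("compatible host:", parseOSRelease(out)["ID"]) // -> buildroot
}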
	I0729 11:31:35.893906  135944 main.go:141] libmachine: Provisioning with buildroot...
	I0729 11:31:35.893919  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetMachineName
	I0729 11:31:35.894240  135944 buildroot.go:166] provisioning hostname "ha-691698-m02"
	I0729 11:31:35.894272  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetMachineName
	I0729 11:31:35.894471  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:35.897214  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.897570  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:35.897592  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.897759  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:35.897946  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.898118  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.898265  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:35.898409  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:35.898622  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:35.898640  135944 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-691698-m02 && echo "ha-691698-m02" | sudo tee /etc/hostname
	I0729 11:31:36.010748  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-691698-m02
	
	I0729 11:31:36.010780  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.013698  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.014125  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.014152  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.014349  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.014517  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.014666  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.014784  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.014939  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:36.015109  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:36.015125  135944 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-691698-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-691698-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-691698-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:31:36.122113  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:31:36.122143  135944 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 11:31:36.122158  135944 buildroot.go:174] setting up certificates
	I0729 11:31:36.122166  135944 provision.go:84] configureAuth start
	I0729 11:31:36.122175  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetMachineName
	I0729 11:31:36.122491  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:31:36.125054  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.125439  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.125478  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.125648  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.128887  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.129341  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.129374  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.129535  135944 provision.go:143] copyHostCerts
	I0729 11:31:36.129583  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:31:36.129629  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 11:31:36.129650  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:31:36.129737  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 11:31:36.129829  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:31:36.129854  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 11:31:36.129865  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:31:36.129902  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 11:31:36.129960  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:31:36.129983  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 11:31:36.129991  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:31:36.130022  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 11:31:36.130087  135944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.ha-691698-m02 san=[127.0.0.1 192.168.39.5 ha-691698-m02 localhost minikube]
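The line above generates a server certificate whose SANs carry the machine's IPs and hostnames (127.0.0.1, 192.168.39.5, ha-691698-m02, localhost, minikube), signed by the shared CA under .minikube/certs. A self-contained sketch of that kind of signing with the standard library (assumed logic, not minikube's provision code; the CA here is created in-process as a stand-in for ca.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dns []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-691698-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 127.0.0.1 and 192.168.39.5 from the log
		DNSNames:     dns, // e.g. ha-691698-m02, localhost, minikube
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// In-process self-signed CA standing in for .minikube/certs/ca.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().Add(24 * time.Hour), IsCA: true,
		KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	der, _, err := signServerCert(ca, caKey,
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.5")},
		[]string{"ha-691698-m02", "localhost", "minikube"})
	fmt.Println("server cert bytes:", len(der), "err:", err)
}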
	I0729 11:31:36.194045  135944 provision.go:177] copyRemoteCerts
	I0729 11:31:36.194107  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:31:36.194134  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.196817  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.197150  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.197186  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.197398  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.197611  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.197785  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.197925  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	I0729 11:31:36.274662  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 11:31:36.274750  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 11:31:36.299147  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 11:31:36.299218  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:31:36.326189  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 11:31:36.326261  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:31:36.350451  135944 provision.go:87] duration metric: took 228.271408ms to configureAuth
	I0729 11:31:36.350484  135944 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:31:36.350653  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:31:36.350747  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.353558  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.353954  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.353983  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.354146  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.354377  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.354595  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.354759  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.354918  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:36.355102  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:36.355121  135944 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:31:36.606394  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:31:36.606422  135944 main.go:141] libmachine: Checking connection to Docker...
	I0729 11:31:36.606431  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetURL
	I0729 11:31:36.607804  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Using libvirt version 6000000
	I0729 11:31:36.610317  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.610731  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.610759  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.610980  135944 main.go:141] libmachine: Docker is up and running!
	I0729 11:31:36.610997  135944 main.go:141] libmachine: Reticulating splines...
	I0729 11:31:36.611006  135944 client.go:171] duration metric: took 20.832947089s to LocalClient.Create
	I0729 11:31:36.611040  135944 start.go:167] duration metric: took 20.833025153s to libmachine.API.Create "ha-691698"
	I0729 11:31:36.611053  135944 start.go:293] postStartSetup for "ha-691698-m02" (driver="kvm2")
	I0729 11:31:36.611065  135944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:31:36.611083  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:36.611356  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:31:36.611390  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.613595  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.614001  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.614027  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.614134  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.614328  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.614472  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.614605  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	I0729 11:31:36.695117  135944 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:31:36.699498  135944 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:31:36.699535  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 11:31:36.699607  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 11:31:36.699696  135944 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 11:31:36.699709  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /etc/ssl/certs/1209632.pem
	I0729 11:31:36.699810  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:31:36.709245  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:31:36.733214  135944 start.go:296] duration metric: took 122.138653ms for postStartSetup
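The filesync scan above walks .minikube/files and copies each local asset to the matching absolute path on the guest (here 1209632.pem lands in /etc/ssl/certs). A rough sketch of that mapping, assuming the relative layout under files/ mirrors the guest filesystem:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func scanAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil || d.IsDir() {
			return walkErr
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		// A local asset maps to the same relative path rooted at / on the guest.
		assets[path] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return assets, err
}

func main() {
	assets, err := scanAssets("/home/jenkins/minikube-integration/19336-113730/.minikube/files")
	fmt.Println(assets, err)
}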
	I0729 11:31:36.733269  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetConfigRaw
	I0729 11:31:36.733931  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:31:36.736353  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.736792  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.736819  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.737081  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:31:36.737280  135944 start.go:128] duration metric: took 20.978258321s to createHost
	I0729 11:31:36.737310  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.739797  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.740128  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.740153  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.740299  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.740492  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.740678  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.740873  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.741046  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:36.741203  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:36.741220  135944 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:31:36.841808  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252696.814837720
	
	I0729 11:31:36.841844  135944 fix.go:216] guest clock: 1722252696.814837720
	I0729 11:31:36.841856  135944 fix.go:229] Guest: 2024-07-29 11:31:36.81483772 +0000 UTC Remote: 2024-07-29 11:31:36.737293619 +0000 UTC m=+77.663462696 (delta=77.544101ms)
	I0729 11:31:36.841882  135944 fix.go:200] guest clock delta is within tolerance: 77.544101ms
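The guest-clock check above reads the VM's time over SSH (the `date +%s.%N` style command), compares it with the host's, and accepts the machine when the absolute delta stays inside a tolerance (77.544101ms here). A tiny sketch of that comparison, with assumed tolerance:

package main

import (
	"fmt"
	"time"
)

func clockWithinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta // only the magnitude of the skew matters
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(77 * time.Millisecond) // delta similar to the log's 77.544101ms
	d, ok := clockWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
}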
	I0729 11:31:36.841892  135944 start.go:83] releasing machines lock for "ha-691698-m02", held for 21.082953845s
	I0729 11:31:36.841922  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:36.842211  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:31:36.844903  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.845368  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.845393  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.847842  135944 out.go:177] * Found network options:
	I0729 11:31:36.849230  135944 out.go:177]   - NO_PROXY=192.168.39.244
	W0729 11:31:36.850468  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 11:31:36.850506  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:36.851204  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:36.851482  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:36.851590  135944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:31:36.851637  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	W0729 11:31:36.851728  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 11:31:36.851824  135944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:31:36.851844  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.854612  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.854714  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.855000  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.855016  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.855131  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.855149  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.855197  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.855373  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.855377  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.855528  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.855542  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.855730  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.855734  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	I0729 11:31:36.855875  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	I0729 11:31:37.093020  135944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:31:37.099209  135944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:31:37.099274  135944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:31:37.115886  135944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:31:37.115920  135944 start.go:495] detecting cgroup driver to use...
	I0729 11:31:37.115990  135944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:31:37.132295  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:31:37.147287  135944 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:31:37.147351  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:31:37.161781  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:31:37.176933  135944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:31:37.295712  135944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:31:37.452905  135944 docker.go:233] disabling docker service ...
	I0729 11:31:37.452982  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:31:37.469595  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:31:37.483195  135944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:31:37.602172  135944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:31:37.720769  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:31:37.735389  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:31:37.753521  135944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:31:37.753587  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.763991  135944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:31:37.764067  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.774506  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.784887  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.795970  135944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:31:37.807081  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.817852  135944 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.836275  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.847356  135944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:31:37.857326  135944 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:31:37.857388  135944 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:31:37.870174  135944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:31:37.879634  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:31:37.997156  135944 ssh_runner.go:195] Run: sudo systemctl restart crio
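The CRI-O configuration phase above is a fixed sequence of shell commands (sed edits to 02-crio.conf for the pause image, cgroup driver and sysctls, then daemon-reload and a service restart), each issued through the SSH runner and aborted on first failure. A hedged local sketch of that step-runner pattern, using /tmp placeholders instead of the real /etc paths so it is safe to run:

package main

import (
	"fmt"
	"os/exec"
)

func runSteps(steps []string) error {
	for _, s := range steps {
		// Each step mirrors one "ssh_runner.go ... Run:" line from the log.
		if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
			return fmt.Errorf("step %q failed: %v\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	conf := "/tmp/02-crio.conf" // placeholder for /etc/crio/crio.conf.d/02-crio.conf
	steps := []string{
		"touch " + conf,
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		"cat " + conf,
	}
	fmt.Println(runSteps(steps))
}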
	I0729 11:31:38.130010  135944 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:31:38.130111  135944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:31:38.134562  135944 start.go:563] Will wait 60s for crictl version
	I0729 11:31:38.134632  135944 ssh_runner.go:195] Run: which crictl
	I0729 11:31:38.138170  135944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:31:38.174752  135944 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:31:38.174842  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:31:38.203078  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:31:38.232064  135944 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:31:38.233512  135944 out.go:177]   - env NO_PROXY=192.168.39.244
	I0729 11:31:38.234852  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:31:38.237817  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:38.238244  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:38.238273  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:38.238622  135944 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:31:38.243071  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:31:38.255641  135944 mustload.go:65] Loading cluster: ha-691698
	I0729 11:31:38.255931  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:31:38.256285  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:38.256318  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:38.271253  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37451
	I0729 11:31:38.271745  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:38.272312  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:38.272343  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:38.272709  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:38.272944  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:31:38.274470  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:31:38.274782  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:38.274810  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:38.289920  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34917
	I0729 11:31:38.290400  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:38.290915  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:38.290938  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:38.291288  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:38.291514  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:31:38.291693  135944 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698 for IP: 192.168.39.5
	I0729 11:31:38.291705  135944 certs.go:194] generating shared ca certs ...
	I0729 11:31:38.291720  135944 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:31:38.291842  135944 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 11:31:38.291876  135944 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 11:31:38.291882  135944 certs.go:256] generating profile certs ...
	I0729 11:31:38.291946  135944 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key
	I0729 11:31:38.291973  135944 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.2a0997b0
	I0729 11:31:38.291992  135944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.2a0997b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244 192.168.39.5 192.168.39.254]
	I0729 11:31:38.495951  135944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.2a0997b0 ...
	I0729 11:31:38.495990  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.2a0997b0: {Name:mk6b82ec14c3b68f14a2634e48c65b4e1a7c231d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:31:38.496202  135944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.2a0997b0 ...
	I0729 11:31:38.496221  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.2a0997b0: {Name:mk3f9d4694c2ebbbe9aa6512e9bb831c319706dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:31:38.496320  135944 certs.go:381] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.2a0997b0 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt
	I0729 11:31:38.496508  135944 certs.go:385] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.2a0997b0 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key
	I0729 11:31:38.496685  135944 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key
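
The apiserver profile cert generated above carries SANs for the service ClusterIP (10.96.0.1), localhost, both control-plane node IPs, and the HA virtual IP 192.168.39.254, so clients can reach the API server through any of those addresses. A minimal sketch of issuing a certificate with that SAN list using Go's crypto/x509 (self-signed throwaway key here for brevity; the real cert is signed by the minikube CA):

// Illustrative sketch, not minikube's certs.go: a server certificate whose
// IP SANs match the list in the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.244"), net.ParseIP("192.168.39.5"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
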
	I0729 11:31:38.496706  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 11:31:38.496728  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 11:31:38.496745  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 11:31:38.496762  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 11:31:38.496778  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 11:31:38.496792  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 11:31:38.496808  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 11:31:38.496825  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 11:31:38.496888  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 11:31:38.496927  135944 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 11:31:38.496939  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 11:31:38.496997  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:31:38.497030  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:31:38.497060  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 11:31:38.497129  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:31:38.497176  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /usr/share/ca-certificates/1209632.pem
	I0729 11:31:38.497196  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:31:38.497212  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem -> /usr/share/ca-certificates/120963.pem
	I0729 11:31:38.497255  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:31:38.500526  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:38.501085  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:31:38.501117  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:38.501328  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:31:38.501601  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:31:38.501780  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:31:38.501940  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:31:38.573385  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 11:31:38.577821  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 11:31:38.588284  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 11:31:38.592433  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 11:31:38.603257  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 11:31:38.607444  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 11:31:38.618229  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 11:31:38.622201  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 11:31:38.632762  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 11:31:38.636624  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 11:31:38.647369  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 11:31:38.651661  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 11:31:38.663814  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:31:38.689251  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:31:38.713899  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:31:38.739071  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 11:31:38.763569  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 11:31:38.788039  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:31:38.811576  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:31:38.835147  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:31:38.859125  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 11:31:38.883230  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:31:38.907205  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 11:31:38.931538  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 11:31:38.947902  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 11:31:38.963980  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 11:31:38.981027  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 11:31:38.998734  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 11:31:39.016508  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 11:31:39.033781  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 11:31:39.051226  135944 ssh_runner.go:195] Run: openssl version
	I0729 11:31:39.057136  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 11:31:39.068199  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 11:31:39.072744  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 11:31:39.072820  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 11:31:39.078499  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:31:39.088593  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:31:39.098647  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:31:39.102972  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:31:39.103021  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:31:39.108554  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:31:39.118800  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 11:31:39.130875  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 11:31:39.135328  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 11:31:39.135384  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 11:31:39.141022  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
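
The openssl/ln pairs above install each CA under /etc/ssl/certs by subject hash so TLS libraries can find it (for example b5213941.0 for minikubeCA.pem). A small Go sketch of the same hash-and-symlink step (illustrative, not from minikube; requires openssl on PATH and write access to /etc/ssl/certs):

// Illustrative sketch: openssl prints the subject hash, and a ".0" symlink
// under /etc/ssl/certs points back at the CA file, as in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Tolerate an existing link so re-runs stay idempotent, like the
	// `test -L ... || ln -fs ...` guard in the log.
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		return err
	}
	return nil
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
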
	I0729 11:31:39.151436  135944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:31:39.155155  135944 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 11:31:39.155218  135944 kubeadm.go:934] updating node {m02 192.168.39.5 8443 v1.30.3 crio true true} ...
	I0729 11:31:39.155311  135944 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-691698-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:31:39.155340  135944 kube-vip.go:115] generating kube-vip config ...
	I0729 11:31:39.155384  135944 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 11:31:39.169695  135944 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 11:31:39.169778  135944 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
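
The static pod manifest above is the generated kube-vip config: it advertises the virtual IP 192.168.39.254 on eth0, runs leader election in kube-system, and load-balances the control plane on port 8443. A tiny text/template sketch of rendering the node-specific values into that env block (illustrative only; the template and field names here are hypothetical, not minikube's kube-vip.go):

// Illustrative sketch: render the values that vary in the manifest above
// (VIP, port, interface) into environment entries.
package main

import (
	"os"
	"text/template"
)

const envTmpl = `    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
`

func main() {
	t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
	_ = t.Execute(os.Stdout, struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.168.39.254", Interface: "eth0", Port: 8443})
}
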
	I0729 11:31:39.169850  135944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:31:39.179366  135944 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 11:31:39.179449  135944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 11:31:39.189303  135944 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 11:31:39.189319  135944 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 11:31:39.189352  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 11:31:39.189314  135944 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 11:31:39.189437  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 11:31:39.193697  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 11:31:39.193731  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 11:31:40.054339  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 11:31:40.054418  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 11:31:40.058977  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 11:31:40.059015  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 11:31:41.023647  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:31:41.038178  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 11:31:41.038271  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 11:31:41.042570  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 11:31:41.042608  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
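
The binaries step above checks /var/lib/minikube/binaries/v1.30.3 on the new node; since nothing is there yet, kubelet and kubeadm are downloaded from dl.k8s.io (verified against the .sha256 files named in the checksum= URLs) and the cached kubectl/kubeadm/kubelet are copied over SSH. A minimal Go sketch of that download-and-verify step (illustrative, not minikube's download.go; kubectl is used as the example binary):

// Illustrative sketch: fetch a release binary and verify it against the
// published .sha256 digest, matching the checksum URLs in the log above.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		return
	}
	_ = os.WriteFile("kubectl", bin, 0o755)
	fmt.Println("verified and wrote kubectl")
}
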
	I0729 11:31:41.443856  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 11:31:41.454023  135944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 11:31:41.471925  135944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:31:41.489188  135944 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 11:31:41.506561  135944 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 11:31:41.510602  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:31:41.523379  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:31:41.639890  135944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:31:41.657080  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:31:41.657571  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:41.657644  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:41.673101  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36583
	I0729 11:31:41.673703  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:41.674233  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:41.674262  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:41.674669  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:41.674934  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:31:41.675119  135944 start.go:317] joinCluster: &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:31:41.675230  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 11:31:41.675249  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:31:41.678700  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:41.679123  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:31:41.679156  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:41.679335  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:31:41.679566  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:31:41.679797  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:31:41.679954  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:31:41.832817  135944 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:31:41.832865  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k6yan8.0jnfim4w1mm9t7gt --discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-691698-m02 --control-plane --apiserver-advertise-address=192.168.39.5 --apiserver-bind-port=8443"
	I0729 11:32:03.840928  135944 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k6yan8.0jnfim4w1mm9t7gt --discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-691698-m02 --control-plane --apiserver-advertise-address=192.168.39.5 --apiserver-bind-port=8443": (22.008036047s)
	I0729 11:32:03.840980  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 11:32:04.393347  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-691698-m02 minikube.k8s.io/updated_at=2024_07_29T11_32_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53 minikube.k8s.io/name=ha-691698 minikube.k8s.io/primary=false
	I0729 11:32:04.519579  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-691698-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 11:32:04.627302  135944 start.go:319] duration metric: took 22.952179045s to joinCluster
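
The join sequence above has three parts: the primary prints a join command via `kubeadm token create --print-join-command --ttl=0`, the new node runs that command with the extra `--control-plane`, advertise-address and bind-port flags, and the joined node is then labeled and has its control-plane taint removed. A compact sketch of the two kubeadm invocations (illustrative only; in the real flow each runs over SSH on its own host, here they are plain exec calls on one machine):

// Illustrative sketch: obtain the join command on the primary and run it as a
// control-plane join, mirroring the two kubeadm calls in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// On the primary: print a join command with a fresh, non-expiring token.
	out, err := exec.Command("sudo", "kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	join := strings.TrimSpace(string(out))
	// On the joining node: run it as a control-plane join (flags as in the log).
	cmd := join + " --control-plane --apiserver-advertise-address=192.168.39.5 --apiserver-bind-port=8443"
	if err := exec.Command("sudo", "bash", "-c", cmd).Run(); err != nil {
		fmt.Println(err)
	}
}
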
	I0729 11:32:04.627381  135944 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:32:04.627728  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:32:04.629027  135944 out.go:177] * Verifying Kubernetes components...
	I0729 11:32:04.630310  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:32:04.897544  135944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:32:04.960565  135944 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:32:04.960924  135944 kapi.go:59] client config for ha-691698: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt", KeyFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key", CAFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 11:32:04.961035  135944 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.244:8443
	I0729 11:32:04.961309  135944 node_ready.go:35] waiting up to 6m0s for node "ha-691698-m02" to be "Ready" ...
	I0729 11:32:04.961427  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:04.961439  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:04.961451  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:04.961458  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:04.976233  135944 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0729 11:32:05.462214  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:05.462236  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:05.462247  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:05.462252  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:05.469225  135944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 11:32:05.961609  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:05.961637  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:05.961648  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:05.961653  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:05.966006  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:32:06.461773  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:06.461794  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:06.461802  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:06.461806  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:06.464891  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:06.962386  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:06.962410  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:06.962422  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:06.962428  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:06.965942  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:06.966492  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:07.461874  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:07.461905  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:07.461918  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:07.461922  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:07.465647  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:07.962231  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:07.962257  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:07.962266  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:07.962269  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:07.965721  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:08.461617  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:08.461644  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:08.461657  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:08.461661  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:08.465535  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:08.961603  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:08.961624  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:08.961632  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:08.961638  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:08.964880  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:09.461592  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:09.461616  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:09.461625  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:09.461630  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:09.465258  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:09.465956  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:09.962247  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:09.962270  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:09.962280  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:09.962286  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:09.965603  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:10.461758  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:10.461785  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:10.461797  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:10.461800  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:10.465371  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:10.962146  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:10.962169  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:10.962195  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:10.962200  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:10.965463  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:11.462343  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:11.462370  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:11.462380  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:11.462383  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:11.465533  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:11.466103  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:11.962024  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:11.962048  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:11.962063  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:11.962067  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:12.003235  135944 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0729 11:32:12.462430  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:12.462453  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:12.462464  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:12.462472  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:12.465495  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:12.962237  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:12.962263  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:12.962275  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:12.962283  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:12.965500  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:13.461971  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:13.461999  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:13.462010  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:13.462017  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:13.465564  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:13.466248  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:13.961600  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:13.961623  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:13.961632  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:13.961636  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:13.964983  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:14.462191  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:14.462217  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:14.462227  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:14.462232  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:14.465492  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:14.961997  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:14.962020  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:14.962028  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:14.962033  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:14.965348  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:15.462574  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:15.462600  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:15.462612  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:15.462617  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:15.465664  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:15.962062  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:15.962089  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:15.962100  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:15.962105  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:15.965647  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:15.966182  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:16.461554  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:16.461584  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:16.461597  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:16.461602  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:16.465049  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:16.962395  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:16.962420  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:16.962427  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:16.962437  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:16.966012  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:17.461621  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:17.461652  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:17.461664  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:17.461669  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:17.464702  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:17.961593  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:17.961619  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:17.961630  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:17.961636  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:17.964979  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:18.461865  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:18.461890  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:18.461899  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:18.461902  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:18.465345  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:18.465882  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:18.962343  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:18.962369  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:18.962380  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:18.962385  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:18.965722  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:19.462421  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:19.462442  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:19.462451  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:19.462457  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:19.467885  135944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 11:32:19.961966  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:19.961994  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:19.962008  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:19.962013  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:19.965705  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:20.462240  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:20.462262  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:20.462270  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:20.462274  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:20.465514  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:20.466129  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:20.961822  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:20.961845  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:20.961853  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:20.961858  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:20.965865  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:21.461796  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:21.461822  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:21.461831  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:21.461834  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:21.465816  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:21.962363  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:21.962387  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:21.962395  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:21.962400  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:21.966068  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:22.462024  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:22.462045  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.462054  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.462058  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.465535  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:22.466076  135944 node_ready.go:49] node "ha-691698-m02" has status "Ready":"True"
	I0729 11:32:22.466096  135944 node_ready.go:38] duration metric: took 17.504767524s for node "ha-691698-m02" to be "Ready" ...
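
The repeated GETs above are the node-Ready wait: the client polls /api/v1/nodes/ha-691698-m02 roughly twice a second until the Ready condition turns True, which here took about 17.5s of the 6m budget. An equivalent sketch that polls through kubectl instead of minikube's round-tripper (illustrative only; assumes kubectl and a kubeconfig pointing at this cluster):

// Illustrative sketch: wait for the Ready condition on the node named in the
// log above, with the same 6-minute budget.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func nodeReady(name string) bool {
	out, err := exec.Command("kubectl", "get", "node", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if nodeReady("ha-691698-m02") {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
	}
	fmt.Println("timed out waiting for Ready")
}
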
	I0729 11:32:22.466105  135944 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:32:22.466185  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:32:22.466191  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.466198  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.466203  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.471047  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:32:22.477901  135944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.477997  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p7zbj
	I0729 11:32:22.478008  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.478016  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.478020  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.482119  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:32:22.482944  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:22.482963  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.482973  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.482977  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.485028  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:32:22.485579  135944 pod_ready.go:92] pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:22.485602  135944 pod_ready.go:81] duration metric: took 7.674871ms for pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.485616  135944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.485675  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-r48d8
	I0729 11:32:22.485682  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.485690  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.485695  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.487932  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:32:22.488545  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:22.488561  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.488569  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.488574  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.490563  135944 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 11:32:22.491019  135944 pod_ready.go:92] pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:22.491035  135944 pod_ready.go:81] duration metric: took 5.409217ms for pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.491044  135944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.491090  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698
	I0729 11:32:22.491097  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.491105  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.491112  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.493261  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:32:22.493860  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:22.493874  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.493881  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.493884  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.495778  135944 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 11:32:22.496373  135944 pod_ready.go:92] pod "etcd-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:22.496390  135944 pod_ready.go:81] duration metric: took 5.340632ms for pod "etcd-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.496398  135944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.496438  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698-m02
	I0729 11:32:22.496446  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.496452  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.496456  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.498553  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:32:22.499056  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:22.499070  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.499076  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.499079  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.500984  135944 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 11:32:22.501423  135944 pod_ready.go:92] pod "etcd-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:22.501440  135944 pod_ready.go:81] duration metric: took 5.035545ms for pod "etcd-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.501459  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.662878  135944 request.go:629] Waited for 161.330614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698
	I0729 11:32:22.662969  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698
	I0729 11:32:22.662991  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.663010  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.663019  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.666154  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:22.862117  135944 request.go:629] Waited for 195.32766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:22.862184  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:22.862190  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.862198  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.862202  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.865231  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:22.865720  135944 pod_ready.go:92] pod "kube-apiserver-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:22.865739  135944 pod_ready.go:81] duration metric: took 364.269821ms for pod "kube-apiserver-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.865751  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:23.062874  135944 request.go:629] Waited for 197.020122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m02
	I0729 11:32:23.062949  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m02
	I0729 11:32:23.062955  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:23.062962  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:23.062967  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:23.066551  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:23.262594  135944 request.go:629] Waited for 195.28852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:23.262667  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:23.262672  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:23.262682  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:23.262692  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:23.266285  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:23.266821  135944 pod_ready.go:92] pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:23.266840  135944 pod_ready.go:81] duration metric: took 401.080433ms for pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:23.266850  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:23.463079  135944 request.go:629] Waited for 196.158228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698
	I0729 11:32:23.463139  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698
	I0729 11:32:23.463144  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:23.463151  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:23.463156  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:23.466869  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:23.662934  135944 request.go:629] Waited for 195.378415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:23.663000  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:23.663007  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:23.663020  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:23.663028  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:23.666276  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:23.666796  135944 pod_ready.go:92] pod "kube-controller-manager-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:23.666814  135944 pod_ready.go:81] duration metric: took 399.956322ms for pod "kube-controller-manager-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:23.666831  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:23.862888  135944 request.go:629] Waited for 195.986941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m02
	I0729 11:32:23.862976  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m02
	I0729 11:32:23.862986  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:23.862999  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:23.863008  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:23.866406  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.062469  135944 request.go:629] Waited for 195.391025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:24.062557  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:24.062566  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:24.062575  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:24.062580  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:24.065813  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.066574  135944 pod_ready.go:92] pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:24.066592  135944 pod_ready.go:81] duration metric: took 399.755147ms for pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:24.066605  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5hn2s" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:24.262378  135944 request.go:629] Waited for 195.696313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hn2s
	I0729 11:32:24.262454  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hn2s
	I0729 11:32:24.262462  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:24.262473  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:24.262477  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:24.265878  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.462991  135944 request.go:629] Waited for 196.437934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:24.463048  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:24.463057  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:24.463066  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:24.463069  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:24.466620  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.467378  135944 pod_ready.go:92] pod "kube-proxy-5hn2s" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:24.467397  135944 pod_ready.go:81] duration metric: took 400.785631ms for pod "kube-proxy-5hn2s" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:24.467407  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8p4nc" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:24.662603  135944 request.go:629] Waited for 195.10343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8p4nc
	I0729 11:32:24.662664  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8p4nc
	I0729 11:32:24.662672  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:24.662679  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:24.662683  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:24.666538  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.862352  135944 request.go:629] Waited for 195.153062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:24.862426  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:24.862431  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:24.862439  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:24.862444  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:24.865910  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.866463  135944 pod_ready.go:92] pod "kube-proxy-8p4nc" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:24.866485  135944 pod_ready.go:81] duration metric: took 399.072237ms for pod "kube-proxy-8p4nc" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:24.866496  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:25.062674  135944 request.go:629] Waited for 196.089202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698
	I0729 11:32:25.062751  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698
	I0729 11:32:25.062761  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.062771  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.062777  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.066281  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:25.262198  135944 request.go:629] Waited for 195.303482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:25.262275  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:25.262280  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.262288  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.262292  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.265726  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:25.266373  135944 pod_ready.go:92] pod "kube-scheduler-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:25.266403  135944 pod_ready.go:81] duration metric: took 399.899992ms for pod "kube-scheduler-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:25.266415  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:25.462558  135944 request.go:629] Waited for 196.062831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m02
	I0729 11:32:25.462630  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m02
	I0729 11:32:25.462647  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.462656  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.462662  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.465926  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:25.662896  135944 request.go:629] Waited for 196.397272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:25.662979  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:25.662986  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.662996  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.663008  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.666405  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:25.667062  135944 pod_ready.go:92] pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:25.667079  135944 pod_ready.go:81] duration metric: took 400.657123ms for pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:25.667092  135944 pod_ready.go:38] duration metric: took 3.200958973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
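The pod_ready waits above poll each control-plane pod (and its node) through the API server until the pod reports the Ready condition. A minimal client-go sketch of that polling pattern, assuming a kubeconfig at the default location; the helper name waitPodReady is hypothetical and is not minikube's pod_ready.go implementation:

	// wait_pod_ready.go - illustrative only
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the API server until the named pod reports Ready=True.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient and keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-r48d8"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}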
	I0729 11:32:25.667109  135944 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:32:25.667167  135944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:32:25.681427  135944 api_server.go:72] duration metric: took 21.05399667s to wait for apiserver process to appear ...
	I0729 11:32:25.681461  135944 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:32:25.681488  135944 api_server.go:253] Checking apiserver healthz at https://192.168.39.244:8443/healthz ...
	I0729 11:32:25.687357  135944 api_server.go:279] https://192.168.39.244:8443/healthz returned 200:
	ok
	I0729 11:32:25.687449  135944 round_trippers.go:463] GET https://192.168.39.244:8443/version
	I0729 11:32:25.687460  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.687470  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.687477  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.688358  135944 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 11:32:25.688469  135944 api_server.go:141] control plane version: v1.30.3
	I0729 11:32:25.688494  135944 api_server.go:131] duration metric: took 7.02376ms to wait for apiserver health ...
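The healthz and /version probes above amount to two raw requests against the API server. A small sketch of the same checks, reusing the imports and the clientset cs from the previous snippet (the function name apiServerHealthy is hypothetical):

	// apiServerHealthy checks GET /healthz and then reads the server version.
	func apiServerHealthy(ctx context.Context, cs *kubernetes.Clientset) error {
		// GET /healthz should return the literal body "ok", as logged above.
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			return err
		}
		if string(body) != "ok" {
			return fmt.Errorf("unexpected healthz response: %q", body)
		}
		// GET /version reports the control-plane version (v1.30.3 in this run).
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			return err
		}
		fmt.Println("control plane version:", v.GitVersion)
		return nil
	}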
	I0729 11:32:25.688507  135944 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:32:25.862095  135944 request.go:629] Waited for 173.482481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:32:25.862163  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:32:25.862169  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.862177  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.862184  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.867405  135944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 11:32:25.872378  135944 system_pods.go:59] 17 kube-system pods found
	I0729 11:32:25.872417  135944 system_pods.go:61] "coredns-7db6d8ff4d-p7zbj" [7b85aaa0-2ae6-4883-b4e1-8e8af1eea933] Running
	I0729 11:32:25.872422  135944 system_pods.go:61] "coredns-7db6d8ff4d-r48d8" [4d0329d8-26c1-49e5-8af9-8ecda56993ca] Running
	I0729 11:32:25.872426  135944 system_pods.go:61] "etcd-ha-691698" [0ee49cc2-19a3-4c80-bd79-460cc88206ee] Running
	I0729 11:32:25.872430  135944 system_pods.go:61] "etcd-ha-691698-m02" [1b8d5662-c834-47b7-a129-820e1f0a7883] Running
	I0729 11:32:25.872433  135944 system_pods.go:61] "kindnet-gl972" [caf4ea26-7d7a-419f-9493-67639c78ed1d] Running
	I0729 11:32:25.872437  135944 system_pods.go:61] "kindnet-wrx27" [6623ec79-af43-4486-bd89-65e8692e920c] Running
	I0729 11:32:25.872440  135944 system_pods.go:61] "kube-apiserver-ha-691698" [ad0e6226-1f3a-4d3f-a81d-c572dc307e90] Running
	I0729 11:32:25.872443  135944 system_pods.go:61] "kube-apiserver-ha-691698-m02" [03c7a68e-a0df-4d22-a96d-c08d4a6099dd] Running
	I0729 11:32:25.872446  135944 system_pods.go:61] "kube-controller-manager-ha-691698" [33507788-a0ea-4f85-98b8-670617e63b2e] Running
	I0729 11:32:25.872451  135944 system_pods.go:61] "kube-controller-manager-ha-691698-m02" [be300341-bb85-4c72-b66a-f1a5c280e48c] Running
	I0729 11:32:25.872454  135944 system_pods.go:61] "kube-proxy-5hn2s" [b73c788f-9f8d-421e-b967-89b9154ea946] Running
	I0729 11:32:25.872457  135944 system_pods.go:61] "kube-proxy-8p4nc" [c20bd4bc-8fca-437d-854e-b01b594f32f4] Running
	I0729 11:32:25.872460  135944 system_pods.go:61] "kube-scheduler-ha-691698" [c6a21e51-28c0-41d2-b1a1-30bb1ad4e979] Running
	I0729 11:32:25.872463  135944 system_pods.go:61] "kube-scheduler-ha-691698-m02" [65d29208-4055-4da5-b612-454ef28c5c0e] Running
	I0729 11:32:25.872465  135944 system_pods.go:61] "kube-vip-ha-691698" [1b5b8d68-2923-4dc5-bcf1-492593eb2d51] Running
	I0729 11:32:25.872468  135944 system_pods.go:61] "kube-vip-ha-691698-m02" [8a2d8ba0-dc4e-4831-b9f2-31c18b9edc91] Running
	I0729 11:32:25.872472  135944 system_pods.go:61] "storage-provisioner" [694c60e1-9d4e-4fea-96e6-21554bbf1aaa] Running
	I0729 11:32:25.872478  135944 system_pods.go:74] duration metric: took 183.963171ms to wait for pod list to return data ...
	I0729 11:32:25.872490  135944 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:32:26.062955  135944 request.go:629] Waited for 190.370313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/default/serviceaccounts
	I0729 11:32:26.063013  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/default/serviceaccounts
	I0729 11:32:26.063018  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:26.063026  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:26.063031  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:26.066675  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:26.066968  135944 default_sa.go:45] found service account: "default"
	I0729 11:32:26.066988  135944 default_sa.go:55] duration metric: took 194.485878ms for default service account to be created ...
	I0729 11:32:26.067001  135944 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:32:26.262473  135944 request.go:629] Waited for 195.391661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:32:26.262555  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:32:26.262561  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:26.262572  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:26.262578  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:26.267707  135944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 11:32:26.273665  135944 system_pods.go:86] 17 kube-system pods found
	I0729 11:32:26.273698  135944 system_pods.go:89] "coredns-7db6d8ff4d-p7zbj" [7b85aaa0-2ae6-4883-b4e1-8e8af1eea933] Running
	I0729 11:32:26.273706  135944 system_pods.go:89] "coredns-7db6d8ff4d-r48d8" [4d0329d8-26c1-49e5-8af9-8ecda56993ca] Running
	I0729 11:32:26.273711  135944 system_pods.go:89] "etcd-ha-691698" [0ee49cc2-19a3-4c80-bd79-460cc88206ee] Running
	I0729 11:32:26.273716  135944 system_pods.go:89] "etcd-ha-691698-m02" [1b8d5662-c834-47b7-a129-820e1f0a7883] Running
	I0729 11:32:26.273721  135944 system_pods.go:89] "kindnet-gl972" [caf4ea26-7d7a-419f-9493-67639c78ed1d] Running
	I0729 11:32:26.273724  135944 system_pods.go:89] "kindnet-wrx27" [6623ec79-af43-4486-bd89-65e8692e920c] Running
	I0729 11:32:26.273728  135944 system_pods.go:89] "kube-apiserver-ha-691698" [ad0e6226-1f3a-4d3f-a81d-c572dc307e90] Running
	I0729 11:32:26.273733  135944 system_pods.go:89] "kube-apiserver-ha-691698-m02" [03c7a68e-a0df-4d22-a96d-c08d4a6099dd] Running
	I0729 11:32:26.273738  135944 system_pods.go:89] "kube-controller-manager-ha-691698" [33507788-a0ea-4f85-98b8-670617e63b2e] Running
	I0729 11:32:26.273742  135944 system_pods.go:89] "kube-controller-manager-ha-691698-m02" [be300341-bb85-4c72-b66a-f1a5c280e48c] Running
	I0729 11:32:26.273748  135944 system_pods.go:89] "kube-proxy-5hn2s" [b73c788f-9f8d-421e-b967-89b9154ea946] Running
	I0729 11:32:26.273753  135944 system_pods.go:89] "kube-proxy-8p4nc" [c20bd4bc-8fca-437d-854e-b01b594f32f4] Running
	I0729 11:32:26.273759  135944 system_pods.go:89] "kube-scheduler-ha-691698" [c6a21e51-28c0-41d2-b1a1-30bb1ad4e979] Running
	I0729 11:32:26.273765  135944 system_pods.go:89] "kube-scheduler-ha-691698-m02" [65d29208-4055-4da5-b612-454ef28c5c0e] Running
	I0729 11:32:26.273780  135944 system_pods.go:89] "kube-vip-ha-691698" [1b5b8d68-2923-4dc5-bcf1-492593eb2d51] Running
	I0729 11:32:26.273786  135944 system_pods.go:89] "kube-vip-ha-691698-m02" [8a2d8ba0-dc4e-4831-b9f2-31c18b9edc91] Running
	I0729 11:32:26.273791  135944 system_pods.go:89] "storage-provisioner" [694c60e1-9d4e-4fea-96e6-21554bbf1aaa] Running
	I0729 11:32:26.273799  135944 system_pods.go:126] duration metric: took 206.788322ms to wait for k8s-apps to be running ...
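The "k8s-apps to be running" step is essentially a list of kube-system pods with a phase check. A compact sketch under the same assumptions as above (clientset cs already built):

	// systemPodsRunning lists kube-system pods and verifies each is Running.
	func systemPodsRunning(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return len(pods.Items) > 0, nil
	}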
	I0729 11:32:26.273815  135944 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:32:26.273867  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:32:26.288740  135944 system_svc.go:56] duration metric: took 14.918303ms WaitForService to wait for kubelet
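The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" on the node over SSH. A simplified local stand-in (not minikube's ssh_runner, and with the unit name reduced to plain "kubelet") that only looks at the systemctl exit status:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// A zero exit status from is-active means the kubelet unit is running.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}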
	I0729 11:32:26.288785  135944 kubeadm.go:582] duration metric: took 21.661367602s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:32:26.288811  135944 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:32:26.462221  135944 request.go:629] Waited for 173.316729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes
	I0729 11:32:26.462290  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes
	I0729 11:32:26.462297  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:26.462307  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:26.462313  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:26.466058  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:26.466971  135944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:32:26.466996  135944 node_conditions.go:123] node cpu capacity is 2
	I0729 11:32:26.467007  135944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:32:26.467011  135944 node_conditions.go:123] node cpu capacity is 2
	I0729 11:32:26.467015  135944 node_conditions.go:105] duration metric: took 178.198814ms to run NodePressure ...
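The NodePressure step reads each node's capacity, which is where the "cpu capacity is 2" and "storage ephemeral capacity is 17734596Ki" values above come from. A sketch, again reusing the clientset from the first snippet:

	// printNodeCapacity lists the nodes and prints CPU and ephemeral-storage capacity.
	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
		return nil
	}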
	I0729 11:32:26.467027  135944 start.go:241] waiting for startup goroutines ...
	I0729 11:32:26.467050  135944 start.go:255] writing updated cluster config ...
	I0729 11:32:26.469018  135944 out.go:177] 
	I0729 11:32:26.470517  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:32:26.470619  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:32:26.472356  135944 out.go:177] * Starting "ha-691698-m03" control-plane node in "ha-691698" cluster
	I0729 11:32:26.473685  135944 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:32:26.473717  135944 cache.go:56] Caching tarball of preloaded images
	I0729 11:32:26.473840  135944 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:32:26.473852  135944 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:32:26.473946  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:32:26.474117  135944 start.go:360] acquireMachinesLock for ha-691698-m03: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:32:26.474157  135944 start.go:364] duration metric: took 21.796µs to acquireMachinesLock for "ha-691698-m03"
	I0729 11:32:26.474179  135944 start.go:93] Provisioning new machine with config: &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:32:26.474268  135944 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 11:32:26.475957  135944 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:32:26.476052  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:32:26.476087  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:32:26.491106  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0729 11:32:26.491597  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:32:26.492100  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:32:26.492120  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:32:26.492456  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:32:26.492681  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetMachineName
	I0729 11:32:26.492858  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:26.493059  135944 start.go:159] libmachine.API.Create for "ha-691698" (driver="kvm2")
	I0729 11:32:26.493093  135944 client.go:168] LocalClient.Create starting
	I0729 11:32:26.493144  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem
	I0729 11:32:26.493178  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:32:26.493193  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:32:26.493240  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem
	I0729 11:32:26.493257  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:32:26.493267  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:32:26.493282  135944 main.go:141] libmachine: Running pre-create checks...
	I0729 11:32:26.493290  135944 main.go:141] libmachine: (ha-691698-m03) Calling .PreCreateCheck
	I0729 11:32:26.493474  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetConfigRaw
	I0729 11:32:26.493862  135944 main.go:141] libmachine: Creating machine...
	I0729 11:32:26.493874  135944 main.go:141] libmachine: (ha-691698-m03) Calling .Create
	I0729 11:32:26.494029  135944 main.go:141] libmachine: (ha-691698-m03) Creating KVM machine...
	I0729 11:32:26.495358  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found existing default KVM network
	I0729 11:32:26.495463  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found existing private KVM network mk-ha-691698
	I0729 11:32:26.495589  135944 main.go:141] libmachine: (ha-691698-m03) Setting up store path in /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03 ...
	I0729 11:32:26.495614  135944 main.go:141] libmachine: (ha-691698-m03) Building disk image from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 11:32:26.495664  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:26.495569  136723 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:32:26.495788  135944 main.go:141] libmachine: (ha-691698-m03) Downloading /home/jenkins/minikube-integration/19336-113730/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 11:32:26.735242  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:26.735091  136723 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa...
	I0729 11:32:27.006279  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:27.006119  136723 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/ha-691698-m03.rawdisk...
	I0729 11:32:27.006308  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Writing magic tar header
	I0729 11:32:27.006324  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Writing SSH key tar header
	I0729 11:32:27.006338  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:27.006234  136723 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03 ...
	I0729 11:32:27.006353  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03
	I0729 11:32:27.006379  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03 (perms=drwx------)
	I0729 11:32:27.006391  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines
	I0729 11:32:27.006405  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:32:27.006414  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730
	I0729 11:32:27.006424  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 11:32:27.006434  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 11:32:27.006445  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines (perms=drwxr-xr-x)
	I0729 11:32:27.006463  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home
	I0729 11:32:27.006480  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Skipping /home - not owner
	I0729 11:32:27.006491  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube (perms=drwxr-xr-x)
	I0729 11:32:27.006499  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730 (perms=drwxrwxr-x)
	I0729 11:32:27.006504  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 11:32:27.006513  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 11:32:27.006520  135944 main.go:141] libmachine: (ha-691698-m03) Creating domain...
	I0729 11:32:27.007380  135944 main.go:141] libmachine: (ha-691698-m03) define libvirt domain using xml: 
	I0729 11:32:27.007410  135944 main.go:141] libmachine: (ha-691698-m03) <domain type='kvm'>
	I0729 11:32:27.007421  135944 main.go:141] libmachine: (ha-691698-m03)   <name>ha-691698-m03</name>
	I0729 11:32:27.007433  135944 main.go:141] libmachine: (ha-691698-m03)   <memory unit='MiB'>2200</memory>
	I0729 11:32:27.007442  135944 main.go:141] libmachine: (ha-691698-m03)   <vcpu>2</vcpu>
	I0729 11:32:27.007447  135944 main.go:141] libmachine: (ha-691698-m03)   <features>
	I0729 11:32:27.007455  135944 main.go:141] libmachine: (ha-691698-m03)     <acpi/>
	I0729 11:32:27.007460  135944 main.go:141] libmachine: (ha-691698-m03)     <apic/>
	I0729 11:32:27.007467  135944 main.go:141] libmachine: (ha-691698-m03)     <pae/>
	I0729 11:32:27.007471  135944 main.go:141] libmachine: (ha-691698-m03)     
	I0729 11:32:27.007478  135944 main.go:141] libmachine: (ha-691698-m03)   </features>
	I0729 11:32:27.007484  135944 main.go:141] libmachine: (ha-691698-m03)   <cpu mode='host-passthrough'>
	I0729 11:32:27.007489  135944 main.go:141] libmachine: (ha-691698-m03)   
	I0729 11:32:27.007494  135944 main.go:141] libmachine: (ha-691698-m03)   </cpu>
	I0729 11:32:27.007499  135944 main.go:141] libmachine: (ha-691698-m03)   <os>
	I0729 11:32:27.007506  135944 main.go:141] libmachine: (ha-691698-m03)     <type>hvm</type>
	I0729 11:32:27.007538  135944 main.go:141] libmachine: (ha-691698-m03)     <boot dev='cdrom'/>
	I0729 11:32:27.007562  135944 main.go:141] libmachine: (ha-691698-m03)     <boot dev='hd'/>
	I0729 11:32:27.007575  135944 main.go:141] libmachine: (ha-691698-m03)     <bootmenu enable='no'/>
	I0729 11:32:27.007583  135944 main.go:141] libmachine: (ha-691698-m03)   </os>
	I0729 11:32:27.007593  135944 main.go:141] libmachine: (ha-691698-m03)   <devices>
	I0729 11:32:27.007606  135944 main.go:141] libmachine: (ha-691698-m03)     <disk type='file' device='cdrom'>
	I0729 11:32:27.007623  135944 main.go:141] libmachine: (ha-691698-m03)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/boot2docker.iso'/>
	I0729 11:32:27.007639  135944 main.go:141] libmachine: (ha-691698-m03)       <target dev='hdc' bus='scsi'/>
	I0729 11:32:27.007651  135944 main.go:141] libmachine: (ha-691698-m03)       <readonly/>
	I0729 11:32:27.007662  135944 main.go:141] libmachine: (ha-691698-m03)     </disk>
	I0729 11:32:27.007674  135944 main.go:141] libmachine: (ha-691698-m03)     <disk type='file' device='disk'>
	I0729 11:32:27.007687  135944 main.go:141] libmachine: (ha-691698-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 11:32:27.007704  135944 main.go:141] libmachine: (ha-691698-m03)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/ha-691698-m03.rawdisk'/>
	I0729 11:32:27.007720  135944 main.go:141] libmachine: (ha-691698-m03)       <target dev='hda' bus='virtio'/>
	I0729 11:32:27.007732  135944 main.go:141] libmachine: (ha-691698-m03)     </disk>
	I0729 11:32:27.007743  135944 main.go:141] libmachine: (ha-691698-m03)     <interface type='network'>
	I0729 11:32:27.007753  135944 main.go:141] libmachine: (ha-691698-m03)       <source network='mk-ha-691698'/>
	I0729 11:32:27.007768  135944 main.go:141] libmachine: (ha-691698-m03)       <model type='virtio'/>
	I0729 11:32:27.007781  135944 main.go:141] libmachine: (ha-691698-m03)     </interface>
	I0729 11:32:27.007796  135944 main.go:141] libmachine: (ha-691698-m03)     <interface type='network'>
	I0729 11:32:27.007810  135944 main.go:141] libmachine: (ha-691698-m03)       <source network='default'/>
	I0729 11:32:27.007823  135944 main.go:141] libmachine: (ha-691698-m03)       <model type='virtio'/>
	I0729 11:32:27.007839  135944 main.go:141] libmachine: (ha-691698-m03)     </interface>
	I0729 11:32:27.007849  135944 main.go:141] libmachine: (ha-691698-m03)     <serial type='pty'>
	I0729 11:32:27.007859  135944 main.go:141] libmachine: (ha-691698-m03)       <target port='0'/>
	I0729 11:32:27.007873  135944 main.go:141] libmachine: (ha-691698-m03)     </serial>
	I0729 11:32:27.007885  135944 main.go:141] libmachine: (ha-691698-m03)     <console type='pty'>
	I0729 11:32:27.007897  135944 main.go:141] libmachine: (ha-691698-m03)       <target type='serial' port='0'/>
	I0729 11:32:27.007909  135944 main.go:141] libmachine: (ha-691698-m03)     </console>
	I0729 11:32:27.007919  135944 main.go:141] libmachine: (ha-691698-m03)     <rng model='virtio'>
	I0729 11:32:27.007932  135944 main.go:141] libmachine: (ha-691698-m03)       <backend model='random'>/dev/random</backend>
	I0729 11:32:27.007946  135944 main.go:141] libmachine: (ha-691698-m03)     </rng>
	I0729 11:32:27.007957  135944 main.go:141] libmachine: (ha-691698-m03)     
	I0729 11:32:27.007967  135944 main.go:141] libmachine: (ha-691698-m03)     
	I0729 11:32:27.007979  135944 main.go:141] libmachine: (ha-691698-m03)   </devices>
	I0729 11:32:27.007987  135944 main.go:141] libmachine: (ha-691698-m03) </domain>
	I0729 11:32:27.007999  135944 main.go:141] libmachine: (ha-691698-m03) 
	I0729 11:32:27.014811  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:a6:7f:ab in network default
	I0729 11:32:27.015438  135944 main.go:141] libmachine: (ha-691698-m03) Ensuring networks are active...
	I0729 11:32:27.015464  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:27.016179  135944 main.go:141] libmachine: (ha-691698-m03) Ensuring network default is active
	I0729 11:32:27.016502  135944 main.go:141] libmachine: (ha-691698-m03) Ensuring network mk-ha-691698 is active
	I0729 11:32:27.016836  135944 main.go:141] libmachine: (ha-691698-m03) Getting domain xml...
	I0729 11:32:27.017577  135944 main.go:141] libmachine: (ha-691698-m03) Creating domain...
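The m03 machine is brought up by defining a libvirt domain from the XML logged above and then creating (starting) it. A bare-bones sketch of that flow with the libvirt Go bindings (libvirt.org/go/libvirt); the XML file path is a placeholder and error handling is reduced to panics:

	package main

	import (
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI above
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		xml, err := os.ReadFile("ha-691698-m03.xml") // the <domain type='kvm'> document logged above
		if err != nil {
			panic(err)
		}
		dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // "Creating domain..." (starts the VM)
			panic(err)
		}
	}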
	I0729 11:32:28.250903  135944 main.go:141] libmachine: (ha-691698-m03) Waiting to get IP...
	I0729 11:32:28.251767  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:28.252191  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:28.252218  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:28.252145  136723 retry.go:31] will retry after 253.703332ms: waiting for machine to come up
	I0729 11:32:28.507702  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:28.508150  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:28.508180  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:28.508098  136723 retry.go:31] will retry after 267.484872ms: waiting for machine to come up
	I0729 11:32:28.777566  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:28.777996  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:28.778023  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:28.777948  136723 retry.go:31] will retry after 341.397216ms: waiting for machine to come up
	I0729 11:32:29.120621  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:29.121220  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:29.121246  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:29.121176  136723 retry.go:31] will retry after 608.777311ms: waiting for machine to come up
	I0729 11:32:29.731560  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:29.732037  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:29.732101  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:29.731998  136723 retry.go:31] will retry after 693.26674ms: waiting for machine to come up
	I0729 11:32:30.426477  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:30.426858  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:30.426886  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:30.426825  136723 retry.go:31] will retry after 791.149999ms: waiting for machine to come up
	I0729 11:32:31.219306  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:31.219746  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:31.219778  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:31.219702  136723 retry.go:31] will retry after 904.929817ms: waiting for machine to come up
	I0729 11:32:32.126018  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:32.126502  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:32.126541  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:32.126449  136723 retry.go:31] will retry after 1.220150284s: waiting for machine to come up
	I0729 11:32:33.348801  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:33.349346  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:33.349373  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:33.349288  136723 retry.go:31] will retry after 1.438498563s: waiting for machine to come up
	I0729 11:32:34.789836  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:34.790306  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:34.790335  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:34.790267  136723 retry.go:31] will retry after 1.804831632s: waiting for machine to come up
	I0729 11:32:36.596807  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:36.597242  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:36.597271  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:36.597191  136723 retry.go:31] will retry after 2.583018327s: waiting for machine to come up
	I0729 11:32:39.182967  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:39.183479  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:39.183505  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:39.183431  136723 retry.go:31] will retry after 2.35917847s: waiting for machine to come up
	I0729 11:32:41.543809  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:41.544193  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:41.544216  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:41.544148  136723 retry.go:31] will retry after 3.772141656s: waiting for machine to come up
	I0729 11:32:45.321108  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:45.321512  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:45.321536  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:45.321469  136723 retry.go:31] will retry after 4.123061195s: waiting for machine to come up
	I0729 11:32:49.447711  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.448117  135944 main.go:141] libmachine: (ha-691698-m03) Found IP for machine: 192.168.39.23
	I0729 11:32:49.448137  135944 main.go:141] libmachine: (ha-691698-m03) Reserving static IP address...
	I0729 11:32:49.448147  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has current primary IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.448538  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find host DHCP lease matching {name: "ha-691698-m03", mac: "52:54:00:67:96:46", ip: "192.168.39.23"} in network mk-ha-691698
	I0729 11:32:49.522499  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Getting to WaitForSSH function...
	I0729 11:32:49.522531  135944 main.go:141] libmachine: (ha-691698-m03) Reserved static IP address: 192.168.39.23
	I0729 11:32:49.522552  135944 main.go:141] libmachine: (ha-691698-m03) Waiting for SSH to be available...
	I0729 11:32:49.524782  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.525242  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:minikube Clientid:01:52:54:00:67:96:46}
	I0729 11:32:49.525274  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.525424  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Using SSH client type: external
	I0729 11:32:49.525456  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa (-rw-------)
	I0729 11:32:49.525485  135944 main.go:141] libmachine: (ha-691698-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:32:49.525499  135944 main.go:141] libmachine: (ha-691698-m03) DBG | About to run SSH command:
	I0729 11:32:49.525623  135944 main.go:141] libmachine: (ha-691698-m03) DBG | exit 0
	I0729 11:32:49.652774  135944 main.go:141] libmachine: (ha-691698-m03) DBG | SSH cmd err, output: <nil>: 
	I0729 11:32:49.653029  135944 main.go:141] libmachine: (ha-691698-m03) KVM machine creation complete!
	I0729 11:32:49.653317  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetConfigRaw
	I0729 11:32:49.653904  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:49.654108  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:49.654298  135944 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:32:49.654313  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:32:49.655656  135944 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:32:49.655671  135944 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:32:49.655677  135944 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:32:49.655683  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:49.658019  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.658500  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:49.658530  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.658781  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:49.658981  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.659145  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.659296  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:49.659588  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:49.659819  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:49.659831  135944 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:32:49.764235  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:32:49.764267  135944 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:32:49.764276  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:49.766999  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.767350  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:49.767373  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.767587  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:49.767767  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.767950  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.768118  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:49.768286  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:49.768443  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:49.768453  135944 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:32:49.873362  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:32:49.873422  135944 main.go:141] libmachine: found compatible host: buildroot
	I0729 11:32:49.873429  135944 main.go:141] libmachine: Provisioning with buildroot...
	I0729 11:32:49.873438  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetMachineName
	I0729 11:32:49.873732  135944 buildroot.go:166] provisioning hostname "ha-691698-m03"
	I0729 11:32:49.873756  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetMachineName
	I0729 11:32:49.873956  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:49.876382  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.876755  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:49.876781  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.876951  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:49.877121  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.877260  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.877413  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:49.877597  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:49.877762  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:49.877774  135944 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-691698-m03 && echo "ha-691698-m03" | sudo tee /etc/hostname
	I0729 11:32:49.998655  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-691698-m03
	
	I0729 11:32:49.998699  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:50.001688  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.002124  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.002152  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.002368  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:50.002574  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.002738  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.002886  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:50.003045  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:50.003236  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:50.003252  135944 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-691698-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-691698-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-691698-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:32:50.117832  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:32:50.117868  135944 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 11:32:50.117890  135944 buildroot.go:174] setting up certificates
	I0729 11:32:50.117911  135944 provision.go:84] configureAuth start
	I0729 11:32:50.117925  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetMachineName
	I0729 11:32:50.118204  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:32:50.121640  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.122058  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.122097  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.122265  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:50.124448  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.124802  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.124828  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.125021  135944 provision.go:143] copyHostCerts
	I0729 11:32:50.125052  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:32:50.125085  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 11:32:50.125094  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:32:50.125158  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 11:32:50.125227  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:32:50.125244  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 11:32:50.125250  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:32:50.125272  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 11:32:50.125358  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:32:50.125378  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 11:32:50.125382  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:32:50.125403  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 11:32:50.125452  135944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.ha-691698-m03 san=[127.0.0.1 192.168.39.23 ha-691698-m03 localhost minikube]
	I0729 11:32:50.523937  135944 provision.go:177] copyRemoteCerts
	I0729 11:32:50.523997  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:32:50.524022  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:50.526913  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.527358  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.527384  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.527554  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:50.527746  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.527948  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:50.528143  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:32:50.614253  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 11:32:50.614328  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 11:32:50.638415  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 11:32:50.638497  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:32:50.661486  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 11:32:50.661580  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:32:50.683745  135944 provision.go:87] duration metric: took 565.817341ms to configureAuth
	I0729 11:32:50.683774  135944 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:32:50.684051  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:32:50.684142  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:50.686743  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.687151  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.687192  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.687406  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:50.687636  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.687828  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.687953  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:50.688070  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:50.688256  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:50.688270  135944 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:32:50.949315  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
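
The `%!s(MISSING)` markers that show up in several logged commands (here, and later in the `date`, `find`, and `stat` invocations) are not corruption of the log: the command text contains a literal `%s` meant for the shell's own printf, and the runner passes that text through a Go fmt formatting call with no matching argument, so fmt renders its missing-argument placeholder. A two-line reproduction of the effect:

    package main

    import "fmt"

    func main() {
    	// A %s verb with no corresponding argument is rendered as "%!s(MISSING)" by the fmt package.
    	fmt.Println(fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s"))
    }
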
	I0729 11:32:50.949343  135944 main.go:141] libmachine: Checking connection to Docker...
	I0729 11:32:50.949352  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetURL
	I0729 11:32:50.950652  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Using libvirt version 6000000
	I0729 11:32:50.952621  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.952944  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.953002  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.953164  135944 main.go:141] libmachine: Docker is up and running!
	I0729 11:32:50.953182  135944 main.go:141] libmachine: Reticulating splines...
	I0729 11:32:50.953191  135944 client.go:171] duration metric: took 24.460085955s to LocalClient.Create
	I0729 11:32:50.953218  135944 start.go:167] duration metric: took 24.460171474s to libmachine.API.Create "ha-691698"
	I0729 11:32:50.953228  135944 start.go:293] postStartSetup for "ha-691698-m03" (driver="kvm2")
	I0729 11:32:50.953238  135944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:32:50.953264  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:50.953522  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:32:50.953550  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:50.955584  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.955929  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.955954  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.956152  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:50.956340  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.956491  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:50.956628  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:32:51.038978  135944 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:32:51.043053  135944 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:32:51.043084  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 11:32:51.043178  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 11:32:51.043250  135944 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 11:32:51.043260  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /etc/ssl/certs/1209632.pem
	I0729 11:32:51.043337  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:32:51.052205  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:32:51.074936  135944 start.go:296] duration metric: took 121.692957ms for postStartSetup
	I0729 11:32:51.074982  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetConfigRaw
	I0729 11:32:51.075567  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:32:51.078071  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.078474  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:51.078497  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.078731  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:32:51.078948  135944 start.go:128] duration metric: took 24.604669765s to createHost
	I0729 11:32:51.078972  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:51.081210  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.081480  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:51.081503  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.081634  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:51.081805  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:51.081962  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:51.082091  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:51.082250  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:51.082415  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:51.082426  135944 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:32:51.193768  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252771.171839295
	
	I0729 11:32:51.193792  135944 fix.go:216] guest clock: 1722252771.171839295
	I0729 11:32:51.193799  135944 fix.go:229] Guest: 2024-07-29 11:32:51.171839295 +0000 UTC Remote: 2024-07-29 11:32:51.078960346 +0000 UTC m=+152.005129423 (delta=92.878949ms)
	I0729 11:32:51.193821  135944 fix.go:200] guest clock delta is within tolerance: 92.878949ms
	I0729 11:32:51.193827  135944 start.go:83] releasing machines lock for "ha-691698-m03", held for 24.719660407s
	I0729 11:32:51.193851  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:51.194135  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:32:51.196816  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.197215  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:51.197257  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.199300  135944 out.go:177] * Found network options:
	I0729 11:32:51.200740  135944 out.go:177]   - NO_PROXY=192.168.39.244,192.168.39.5
	W0729 11:32:51.201894  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 11:32:51.201917  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 11:32:51.201954  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:51.202485  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:51.202701  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:51.202816  135944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:32:51.202861  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	W0729 11:32:51.202930  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 11:32:51.202958  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 11:32:51.203024  135944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:32:51.203048  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:51.205679  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.206115  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:51.206141  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.206160  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.206328  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:51.206482  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:51.206614  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:51.206648  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:51.206672  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.206757  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:32:51.206815  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:51.206960  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:51.207088  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:51.207219  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:32:51.439362  135944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:32:51.445244  135944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:32:51.445322  135944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:32:51.462422  135944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:32:51.462454  135944 start.go:495] detecting cgroup driver to use...
	I0729 11:32:51.462531  135944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:32:51.478560  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:32:51.492786  135944 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:32:51.492852  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:32:51.506773  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:32:51.519525  135944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:32:51.635696  135944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:32:51.781569  135944 docker.go:233] disabling docker service ...
	I0729 11:32:51.781659  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:32:51.797897  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:32:51.812185  135944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:32:51.961731  135944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:32:52.079126  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:32:52.093096  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:32:52.111134  135944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:32:52.111200  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.120915  135944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:32:52.120997  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.130853  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.140645  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.149934  135944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:32:52.159520  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.168747  135944 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.184168  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
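
Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in configured roughly as follows. This is only an illustrative sketch: the section headers and any other keys already present in the file are not visible in this log, only the values being set are.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
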
	I0729 11:32:52.193278  135944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:32:52.201924  135944 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:32:52.201981  135944 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:32:52.213583  135944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:32:52.222229  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:32:52.332876  135944 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:32:52.463664  135944 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:32:52.463751  135944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:32:52.468089  135944 start.go:563] Will wait 60s for crictl version
	I0729 11:32:52.468152  135944 ssh_runner.go:195] Run: which crictl
	I0729 11:32:52.471589  135944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:32:52.507852  135944 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:32:52.507930  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:32:52.537199  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:32:52.564742  135944 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:32:52.566072  135944 out.go:177]   - env NO_PROXY=192.168.39.244
	I0729 11:32:52.567338  135944 out.go:177]   - env NO_PROXY=192.168.39.244,192.168.39.5
	I0729 11:32:52.568500  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:32:52.571227  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:52.571511  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:52.571539  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:52.571772  135944 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:32:52.575623  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:32:52.587039  135944 mustload.go:65] Loading cluster: ha-691698
	I0729 11:32:52.587279  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:32:52.587534  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:32:52.587579  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:32:52.602592  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33375
	I0729 11:32:52.603149  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:32:52.603576  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:32:52.603596  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:32:52.603928  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:32:52.604117  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:32:52.605606  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:32:52.606003  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:32:52.606048  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:32:52.622153  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0729 11:32:52.622543  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:32:52.623029  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:32:52.623049  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:32:52.623325  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:32:52.623519  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:32:52.623689  135944 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698 for IP: 192.168.39.23
	I0729 11:32:52.623702  135944 certs.go:194] generating shared ca certs ...
	I0729 11:32:52.623722  135944 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:32:52.623871  135944 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 11:32:52.623927  135944 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 11:32:52.623952  135944 certs.go:256] generating profile certs ...
	I0729 11:32:52.624078  135944 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key
	I0729 11:32:52.624110  135944 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.405a42bf
	I0729 11:32:52.624132  135944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.405a42bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244 192.168.39.5 192.168.39.23 192.168.39.254]
	I0729 11:32:52.781549  135944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.405a42bf ...
	I0729 11:32:52.781603  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.405a42bf: {Name:mk72a72dfcb0a940636db8277f758a4b89126c0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:32:52.781792  135944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.405a42bf ...
	I0729 11:32:52.781815  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.405a42bf: {Name:mkbbb0d7426fd151fdc24ad3b481afd03426af32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:32:52.781915  135944 certs.go:381] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.405a42bf -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt
	I0729 11:32:52.782066  135944 certs.go:385] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.405a42bf -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key
	I0729 11:32:52.782228  135944 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key
	I0729 11:32:52.782248  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 11:32:52.782266  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 11:32:52.782286  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 11:32:52.782305  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 11:32:52.782322  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 11:32:52.782340  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 11:32:52.782357  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 11:32:52.782372  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 11:32:52.782440  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 11:32:52.782481  135944 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 11:32:52.782494  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 11:32:52.782525  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:32:52.782555  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:32:52.782593  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 11:32:52.782644  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:32:52.782682  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:32:52.782703  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem -> /usr/share/ca-certificates/120963.pem
	I0729 11:32:52.782720  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /usr/share/ca-certificates/1209632.pem
	I0729 11:32:52.782765  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:32:52.785986  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:32:52.786429  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:32:52.786460  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:32:52.786632  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:32:52.786842  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:32:52.786981  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:32:52.787130  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:32:52.861367  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 11:32:52.865802  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 11:32:52.875801  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 11:32:52.879467  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 11:32:52.889150  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 11:32:52.892914  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 11:32:52.906958  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 11:32:52.911068  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 11:32:52.921168  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 11:32:52.924834  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 11:32:52.935394  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 11:32:52.939557  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 11:32:52.949542  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:32:52.973517  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:32:52.996464  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:32:53.019282  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 11:32:53.041165  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 11:32:53.062393  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:32:53.083986  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:32:53.105902  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:32:53.131173  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:32:53.153435  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 11:32:53.177365  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 11:32:53.200703  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 11:32:53.215927  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 11:32:53.231438  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 11:32:53.247107  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 11:32:53.262900  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 11:32:53.279050  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 11:32:53.294807  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 11:32:53.311716  135944 ssh_runner.go:195] Run: openssl version
	I0729 11:32:53.317354  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:32:53.327563  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:32:53.331932  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:32:53.331989  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:32:53.337727  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:32:53.348175  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 11:32:53.358888  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 11:32:53.363044  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 11:32:53.363110  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 11:32:53.368498  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
	I0729 11:32:53.378504  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 11:32:53.388631  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 11:32:53.392863  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 11:32:53.392921  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 11:32:53.398134  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:32:53.408052  135944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:32:53.411580  135944 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 11:32:53.411628  135944 kubeadm.go:934] updating node {m03 192.168.39.23 8443 v1.30.3 crio true true} ...
	I0729 11:32:53.411738  135944 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-691698-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:32:53.411766  135944 kube-vip.go:115] generating kube-vip config ...
	I0729 11:32:53.411801  135944 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 11:32:53.427737  135944 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 11:32:53.427816  135944 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 11:32:53.427864  135944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:32:53.437078  135944 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 11:32:53.437151  135944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 11:32:53.446286  135944 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 11:32:53.446301  135944 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 11:32:53.446321  135944 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 11:32:53.446327  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 11:32:53.446364  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:32:53.446405  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 11:32:53.446310  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 11:32:53.446516  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 11:32:53.464828  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 11:32:53.464874  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 11:32:53.464904  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 11:32:53.464937  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 11:32:53.464909  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 11:32:53.465024  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 11:32:53.491944  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 11:32:53.491998  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
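
The kubeadm, kubectl, and kubelet binaries are fetched from dl.k8s.io with a `checksum=file:<url>.sha256` qualifier and then copied onto the node. A small sketch of the download-and-verify step for one binary, assuming the published .sha256 file carries the hex digest as its first field:

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"strings"
    )

    // fetch downloads a URL into memory and fails on any non-200 response.
    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
    	bin, err := fetch(base)
    	if err != nil {
    		log.Fatal(err)
    	}
    	sum, err := fetch(base + ".sha256")
    	if err != nil {
    		log.Fatal(err)
    	}
    	want := strings.Fields(string(sum))[0]
    	got := sha256.Sum256(bin)
    	if hex.EncodeToString(got[:]) != want {
    		log.Fatalf("checksum mismatch: got %s want %s", hex.EncodeToString(got[:]), want)
    	}
    	log.Printf("kubectl verified, %d bytes", len(bin))
    }
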
	I0729 11:32:54.351521  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 11:32:54.360880  135944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 11:32:54.376607  135944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:32:54.392646  135944 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 11:32:54.408942  135944 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 11:32:54.412888  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:32:54.425220  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:32:54.541043  135944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:32:54.566818  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:32:54.567219  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:32:54.567268  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:32:54.584426  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0729 11:32:54.584858  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:32:54.585405  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:32:54.585428  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:32:54.585858  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:32:54.586104  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:32:54.586275  135944 start.go:317] joinCluster: &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:32:54.586453  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 11:32:54.586481  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:32:54.589134  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:32:54.589645  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:32:54.589678  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:32:54.589806  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:32:54.589989  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:32:54.590150  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:32:54.590293  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:32:54.738790  135944 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:32:54.738841  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token smpn7k.i9836phgoguqneu8 --discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-691698-m03 --control-plane --apiserver-advertise-address=192.168.39.23 --apiserver-bind-port=8443"
	I0729 11:33:16.724322  135944 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token smpn7k.i9836phgoguqneu8 --discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-691698-m03 --control-plane --apiserver-advertise-address=192.168.39.23 --apiserver-bind-port=8443": (21.985447587s)
	I0729 11:33:16.724369  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 11:33:17.380203  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-691698-m03 minikube.k8s.io/updated_at=2024_07_29T11_33_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53 minikube.k8s.io/name=ha-691698 minikube.k8s.io/primary=false
	I0729 11:33:17.516985  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-691698-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 11:33:17.616661  135944 start.go:319] duration metric: took 23.03037939s to joinCluster
	I0729 11:33:17.616763  135944 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:33:17.617152  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:33:17.617918  135944 out.go:177] * Verifying Kubernetes components...
	I0729 11:33:17.619414  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:33:17.892282  135944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:33:17.975213  135944 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:33:17.975533  135944 kapi.go:59] client config for ha-691698: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt", KeyFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key", CAFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 11:33:17.975612  135944 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.244:8443
	I0729 11:33:17.975894  135944 node_ready.go:35] waiting up to 6m0s for node "ha-691698-m03" to be "Ready" ...
	I0729 11:33:17.976023  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:17.976037  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:17.976048  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:17.976052  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:17.979508  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:18.476527  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:18.476555  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:18.476567  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:18.476572  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:18.481146  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:18.976451  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:18.976476  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:18.976487  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:18.976493  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:18.979912  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:19.476610  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:19.476632  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:19.476640  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:19.476644  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:19.479619  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:19.977118  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:19.977145  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:19.977156  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:19.977166  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:19.980385  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:19.981052  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:20.476401  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:20.476428  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:20.476439  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:20.476444  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:20.481123  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:20.976819  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:20.976852  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:20.976864  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:20.976870  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:20.980534  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:21.476742  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:21.476763  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:21.476770  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:21.476773  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:21.479679  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:21.976506  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:21.976529  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:21.976538  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:21.976542  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:21.979639  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:22.476482  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:22.476513  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:22.476526  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:22.476540  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:22.480168  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:22.480930  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:22.976210  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:22.976235  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:22.976246  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:22.976251  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:22.979890  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:23.476856  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:23.476889  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:23.476899  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:23.476904  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:23.480180  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:23.976936  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:23.976969  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:23.976979  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:23.976985  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:23.980232  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:24.476901  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:24.476931  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:24.476942  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:24.476948  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:24.480844  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:24.481620  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:24.976755  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:24.976777  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:24.976788  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:24.976795  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:24.979955  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:25.477012  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:25.477036  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:25.477044  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:25.477048  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:25.480644  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:25.977108  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:25.977140  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:25.977149  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:25.977152  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:25.980509  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:26.477158  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:26.477182  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:26.477193  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:26.477198  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:26.480819  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:26.976541  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:26.976563  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:26.976571  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:26.976575  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:26.980063  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:26.980515  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:27.476900  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:27.476924  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:27.476932  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:27.476937  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:27.480865  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:27.976715  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:27.976744  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:27.976756  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:27.976760  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:27.979990  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:28.477061  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:28.477089  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:28.477101  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:28.477106  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:28.480424  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:28.977002  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:28.977026  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:28.977035  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:28.977041  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:28.980428  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:28.980984  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:29.476284  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:29.476311  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:29.476323  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:29.476331  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:29.479745  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:29.977013  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:29.977037  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:29.977046  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:29.977051  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:29.980809  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:30.477161  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:30.477199  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:30.477211  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:30.477215  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:30.483350  135944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 11:33:30.976863  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:30.976894  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:30.976905  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:30.976909  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:30.980365  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:31.476139  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:31.476163  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:31.476172  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:31.476175  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:31.479329  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:31.479918  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:31.976198  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:31.976221  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:31.976230  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:31.976234  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:31.979984  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:32.477101  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:32.477126  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:32.477134  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:32.477138  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:32.480542  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:32.976396  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:32.976431  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:32.976444  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:32.976448  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:32.980223  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:33.476113  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:33.476135  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:33.476141  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:33.476145  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:33.479452  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:33.480127  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:33.976473  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:33.976500  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:33.976514  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:33.976520  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:33.981126  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:34.476861  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:34.476889  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:34.476898  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:34.476903  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:34.480752  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:34.976558  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:34.976582  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:34.976591  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:34.976595  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:34.980049  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.476564  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:35.476596  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.476608  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.476613  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.480040  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.480595  135944 node_ready.go:49] node "ha-691698-m03" has status "Ready":"True"
	I0729 11:33:35.480615  135944 node_ready.go:38] duration metric: took 17.504704382s for node "ha-691698-m03" to be "Ready" ...
	I0729 11:33:35.480623  135944 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:33:35.480698  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:33:35.480708  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.480716  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.480719  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.487227  135944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 11:33:35.493265  135944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.493379  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p7zbj
	I0729 11:33:35.493390  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.493401  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.493409  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.496712  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.497384  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:35.497404  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.497414  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.497420  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.500373  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:35.501076  135944 pod_ready.go:92] pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:35.501103  135944 pod_ready.go:81] duration metric: took 7.806838ms for pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.501117  135944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.501209  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-r48d8
	I0729 11:33:35.501224  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.501235  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.501241  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.504996  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.505767  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:35.505781  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.505789  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.505793  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.508804  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:35.509353  135944 pod_ready.go:92] pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:35.509376  135944 pod_ready.go:81] duration metric: took 8.248373ms for pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.509386  135944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.509443  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698
	I0729 11:33:35.509450  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.509457  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.509461  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.512285  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:35.512806  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:35.512821  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.512827  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.512833  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.515362  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:35.515876  135944 pod_ready.go:92] pod "etcd-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:35.515893  135944 pod_ready.go:81] duration metric: took 6.500912ms for pod "etcd-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.515901  135944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.515955  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698-m02
	I0729 11:33:35.515962  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.515969  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.515972  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.519214  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.519869  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:35.519884  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.519890  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.519895  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.522694  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:35.523140  135944 pod_ready.go:92] pod "etcd-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:35.523157  135944 pod_ready.go:81] duration metric: took 7.249375ms for pod "etcd-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.523167  135944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.677590  135944 request.go:629] Waited for 154.323479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698-m03
	I0729 11:33:35.677661  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698-m03
	I0729 11:33:35.677669  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.677682  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.677691  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.681210  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.877397  135944 request.go:629] Waited for 195.290674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:35.877485  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:35.877493  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.877499  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.877506  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.881156  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.881836  135944 pod_ready.go:92] pod "etcd-ha-691698-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:35.881857  135944 pod_ready.go:81] duration metric: took 358.684511ms for pod "etcd-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.881872  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:36.076974  135944 request.go:629] Waited for 195.006786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698
	I0729 11:33:36.077035  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698
	I0729 11:33:36.077040  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:36.077048  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:36.077051  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:36.080790  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:36.276792  135944 request.go:629] Waited for 195.282686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:36.276864  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:36.276870  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:36.276878  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:36.276883  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:36.280177  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:36.280821  135944 pod_ready.go:92] pod "kube-apiserver-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:36.280840  135944 pod_ready.go:81] duration metric: took 398.960323ms for pod "kube-apiserver-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:36.280850  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:36.476985  135944 request.go:629] Waited for 196.028731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m02
	I0729 11:33:36.477053  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m02
	I0729 11:33:36.477058  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:36.477066  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:36.477071  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:36.479999  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:36.677027  135944 request.go:629] Waited for 196.44866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:36.677102  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:36.677108  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:36.677116  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:36.677121  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:36.680311  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:36.681057  135944 pod_ready.go:92] pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:36.681086  135944 pod_ready.go:81] duration metric: took 400.229128ms for pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:36.681101  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:36.877420  135944 request.go:629] Waited for 196.223189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m03
	I0729 11:33:36.877492  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m03
	I0729 11:33:36.877497  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:36.877505  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:36.877512  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:36.881213  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.077177  135944 request.go:629] Waited for 195.243655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:37.077241  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:37.077248  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:37.077260  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:37.077268  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:37.080628  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.081197  135944 pod_ready.go:92] pod "kube-apiserver-ha-691698-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:37.081219  135944 pod_ready.go:81] duration metric: took 400.111392ms for pod "kube-apiserver-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:37.081231  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:37.277319  135944 request.go:629] Waited for 195.994566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698
	I0729 11:33:37.277380  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698
	I0729 11:33:37.277385  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:37.277391  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:37.277396  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:37.280777  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.477079  135944 request.go:629] Waited for 195.383768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:37.477158  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:37.477166  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:37.477184  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:37.477193  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:37.480746  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.481697  135944 pod_ready.go:92] pod "kube-controller-manager-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:37.481717  135944 pod_ready.go:81] duration metric: took 400.479808ms for pod "kube-controller-manager-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:37.481728  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:37.676866  135944 request.go:629] Waited for 195.051558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m02
	I0729 11:33:37.676954  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m02
	I0729 11:33:37.676977  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:37.676988  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:37.676999  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:37.680200  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.877421  135944 request.go:629] Waited for 196.36184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:37.877483  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:37.877489  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:37.877499  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:37.877505  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:37.880784  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.881356  135944 pod_ready.go:92] pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:37.881375  135944 pod_ready.go:81] duration metric: took 399.640955ms for pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:37.881388  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:38.077575  135944 request.go:629] Waited for 196.085992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m03
	I0729 11:33:38.077638  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m03
	I0729 11:33:38.077643  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:38.077651  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:38.077656  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:38.081142  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:38.277262  135944 request.go:629] Waited for 195.361703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:38.277355  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:38.277362  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:38.277372  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:38.277381  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:38.280412  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:38.280986  135944 pod_ready.go:92] pod "kube-controller-manager-ha-691698-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:38.281007  135944 pod_ready.go:81] duration metric: took 399.608004ms for pod "kube-controller-manager-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:38.281017  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5hn2s" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:38.477188  135944 request.go:629] Waited for 196.091424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hn2s
	I0729 11:33:38.477250  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hn2s
	I0729 11:33:38.477255  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:38.477263  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:38.477267  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:38.480865  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:38.676922  135944 request.go:629] Waited for 195.370243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:38.677028  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:38.677036  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:38.677047  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:38.677054  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:38.680314  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:38.680910  135944 pod_ready.go:92] pod "kube-proxy-5hn2s" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:38.680933  135944 pod_ready.go:81] duration metric: took 399.909196ms for pod "kube-proxy-5hn2s" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:38.680947  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8p4nc" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:38.877079  135944 request.go:629] Waited for 196.014421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8p4nc
	I0729 11:33:38.877141  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8p4nc
	I0729 11:33:38.877146  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:38.877155  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:38.877159  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:38.880723  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:39.076665  135944 request.go:629] Waited for 195.263999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:39.076724  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:39.076729  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:39.076737  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:39.076741  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:39.080321  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:39.081129  135944 pod_ready.go:92] pod "kube-proxy-8p4nc" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:39.081150  135944 pod_ready.go:81] duration metric: took 400.191431ms for pod "kube-proxy-8p4nc" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:39.081163  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vd69n" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:39.277071  135944 request.go:629] Waited for 195.822792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vd69n
	I0729 11:33:39.277155  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vd69n
	I0729 11:33:39.277163  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:39.277172  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:39.277178  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:39.280065  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:39.476952  135944 request.go:629] Waited for 196.215506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:39.477039  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:39.477048  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:39.477055  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:39.477062  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:39.480471  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:39.481302  135944 pod_ready.go:92] pod "kube-proxy-vd69n" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:39.481328  135944 pod_ready.go:81] duration metric: took 400.156619ms for pod "kube-proxy-vd69n" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:39.481340  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:39.676662  135944 request.go:629] Waited for 195.245723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698
	I0729 11:33:39.676727  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698
	I0729 11:33:39.676734  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:39.676744  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:39.676752  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:39.680109  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:39.877044  135944 request.go:629] Waited for 196.377501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:39.877125  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:39.877134  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:39.877148  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:39.877158  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:39.880646  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:39.881175  135944 pod_ready.go:92] pod "kube-scheduler-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:39.881195  135944 pod_ready.go:81] duration metric: took 399.847201ms for pod "kube-scheduler-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:39.881208  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:40.077415  135944 request.go:629] Waited for 196.12709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m02
	I0729 11:33:40.077490  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m02
	I0729 11:33:40.077495  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.077504  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.077509  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.082182  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:40.277261  135944 request.go:629] Waited for 194.338625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:40.277316  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:40.277321  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.277332  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.277337  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.280480  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:40.281181  135944 pod_ready.go:92] pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:40.281201  135944 pod_ready.go:81] duration metric: took 399.985434ms for pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:40.281211  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:40.477244  135944 request.go:629] Waited for 195.927385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m03
	I0729 11:33:40.477327  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m03
	I0729 11:33:40.477340  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.477351  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.477358  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.482353  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:40.677394  135944 request.go:629] Waited for 194.413012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:40.677474  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:40.677481  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.677491  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.677496  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.681641  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:40.682503  135944 pod_ready.go:92] pod "kube-scheduler-ha-691698-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:40.682528  135944 pod_ready.go:81] duration metric: took 401.308999ms for pod "kube-scheduler-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:40.682541  135944 pod_ready.go:38] duration metric: took 5.20190254s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:33:40.682558  135944 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:33:40.682613  135944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:33:40.698690  135944 api_server.go:72] duration metric: took 23.081883659s to wait for apiserver process to appear ...
	I0729 11:33:40.698720  135944 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:33:40.698744  135944 api_server.go:253] Checking apiserver healthz at https://192.168.39.244:8443/healthz ...
	I0729 11:33:40.703766  135944 api_server.go:279] https://192.168.39.244:8443/healthz returned 200:
	ok
	I0729 11:33:40.703848  135944 round_trippers.go:463] GET https://192.168.39.244:8443/version
	I0729 11:33:40.703856  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.703864  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.703870  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.705000  135944 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 11:33:40.705070  135944 api_server.go:141] control plane version: v1.30.3
	I0729 11:33:40.705087  135944 api_server.go:131] duration metric: took 6.35952ms to wait for apiserver health ...
	I0729 11:33:40.705095  135944 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:33:40.876860  135944 request.go:629] Waited for 171.677496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:33:40.876948  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:33:40.876956  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.876986  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.876994  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.883957  135944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 11:33:40.890456  135944 system_pods.go:59] 24 kube-system pods found
	I0729 11:33:40.890494  135944 system_pods.go:61] "coredns-7db6d8ff4d-p7zbj" [7b85aaa0-2ae6-4883-b4e1-8e8af1eea933] Running
	I0729 11:33:40.890501  135944 system_pods.go:61] "coredns-7db6d8ff4d-r48d8" [4d0329d8-26c1-49e5-8af9-8ecda56993ca] Running
	I0729 11:33:40.890506  135944 system_pods.go:61] "etcd-ha-691698" [0ee49cc2-19a3-4c80-bd79-460cc88206ee] Running
	I0729 11:33:40.890512  135944 system_pods.go:61] "etcd-ha-691698-m02" [1b8d5662-c834-47b7-a129-820e1f0a7883] Running
	I0729 11:33:40.890517  135944 system_pods.go:61] "etcd-ha-691698-m03" [b8bce546-d13c-4402-b1d4-d2f0d00aba09] Running
	I0729 11:33:40.890521  135944 system_pods.go:61] "kindnet-gl972" [caf4ea26-7d7a-419f-9493-67639c78ed1d] Running
	I0729 11:33:40.890526  135944 system_pods.go:61] "kindnet-n929l" [02c92d04-dd42-46c2-9033-5306d7490e0f] Running
	I0729 11:33:40.890530  135944 system_pods.go:61] "kindnet-wrx27" [6623ec79-af43-4486-bd89-65e8692e920c] Running
	I0729 11:33:40.890535  135944 system_pods.go:61] "kube-apiserver-ha-691698" [ad0e6226-1f3a-4d3f-a81d-c572dc307e90] Running
	I0729 11:33:40.890546  135944 system_pods.go:61] "kube-apiserver-ha-691698-m02" [03c7a68e-a0df-4d22-a96d-c08d4a6099dd] Running
	I0729 11:33:40.890556  135944 system_pods.go:61] "kube-apiserver-ha-691698-m03" [66ea3cca-4a77-4756-855a-b34c2e420ca7] Running
	I0729 11:33:40.890561  135944 system_pods.go:61] "kube-controller-manager-ha-691698" [33507788-a0ea-4f85-98b8-670617e63b2e] Running
	I0729 11:33:40.890565  135944 system_pods.go:61] "kube-controller-manager-ha-691698-m02" [be300341-bb85-4c72-b66a-f1a5c280e48c] Running
	I0729 11:33:40.890572  135944 system_pods.go:61] "kube-controller-manager-ha-691698-m03" [a0a8f594-4b59-4601-958e-fd524fde33ee] Running
	I0729 11:33:40.890575  135944 system_pods.go:61] "kube-proxy-5hn2s" [b73c788f-9f8d-421e-b967-89b9154ea946] Running
	I0729 11:33:40.890581  135944 system_pods.go:61] "kube-proxy-8p4nc" [c20bd4bc-8fca-437d-854e-b01b594f32f4] Running
	I0729 11:33:40.890584  135944 system_pods.go:61] "kube-proxy-vd69n" [596d3835-5ab1-4009-a1d3-ccde26b14f32] Running
	I0729 11:33:40.890589  135944 system_pods.go:61] "kube-scheduler-ha-691698" [c6a21e51-28c0-41d2-b1a1-30bb1ad4e979] Running
	I0729 11:33:40.890592  135944 system_pods.go:61] "kube-scheduler-ha-691698-m02" [65d29208-4055-4da5-b612-454ef28c5c0e] Running
	I0729 11:33:40.890597  135944 system_pods.go:61] "kube-scheduler-ha-691698-m03" [6519ce66-a98b-4d83-8e81-f1e35896ebdb] Running
	I0729 11:33:40.890601  135944 system_pods.go:61] "kube-vip-ha-691698" [1b5b8d68-2923-4dc5-bcf1-492593eb2d51] Running
	I0729 11:33:40.890604  135944 system_pods.go:61] "kube-vip-ha-691698-m02" [8a2d8ba0-dc4e-4831-b9f2-31c18b9edc91] Running
	I0729 11:33:40.890607  135944 system_pods.go:61] "kube-vip-ha-691698-m03" [0648712d-e530-460f-b39a-c8a61229587f] Running
	I0729 11:33:40.890611  135944 system_pods.go:61] "storage-provisioner" [694c60e1-9d4e-4fea-96e6-21554bbf1aaa] Running
	I0729 11:33:40.890620  135944 system_pods.go:74] duration metric: took 185.512939ms to wait for pod list to return data ...
	I0729 11:33:40.890630  135944 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:33:41.077052  135944 request.go:629] Waited for 186.33972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/default/serviceaccounts
	I0729 11:33:41.077128  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/default/serviceaccounts
	I0729 11:33:41.077136  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:41.077147  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:41.077157  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:41.080477  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:41.080599  135944 default_sa.go:45] found service account: "default"
	I0729 11:33:41.080613  135944 default_sa.go:55] duration metric: took 189.975552ms for default service account to be created ...
	I0729 11:33:41.080621  135944 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:33:41.277084  135944 request.go:629] Waited for 196.39084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:33:41.277169  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:33:41.277178  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:41.277186  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:41.277193  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:41.283853  135944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 11:33:41.290170  135944 system_pods.go:86] 24 kube-system pods found
	I0729 11:33:41.290199  135944 system_pods.go:89] "coredns-7db6d8ff4d-p7zbj" [7b85aaa0-2ae6-4883-b4e1-8e8af1eea933] Running
	I0729 11:33:41.290205  135944 system_pods.go:89] "coredns-7db6d8ff4d-r48d8" [4d0329d8-26c1-49e5-8af9-8ecda56993ca] Running
	I0729 11:33:41.290210  135944 system_pods.go:89] "etcd-ha-691698" [0ee49cc2-19a3-4c80-bd79-460cc88206ee] Running
	I0729 11:33:41.290214  135944 system_pods.go:89] "etcd-ha-691698-m02" [1b8d5662-c834-47b7-a129-820e1f0a7883] Running
	I0729 11:33:41.290218  135944 system_pods.go:89] "etcd-ha-691698-m03" [b8bce546-d13c-4402-b1d4-d2f0d00aba09] Running
	I0729 11:33:41.290222  135944 system_pods.go:89] "kindnet-gl972" [caf4ea26-7d7a-419f-9493-67639c78ed1d] Running
	I0729 11:33:41.290225  135944 system_pods.go:89] "kindnet-n929l" [02c92d04-dd42-46c2-9033-5306d7490e0f] Running
	I0729 11:33:41.290229  135944 system_pods.go:89] "kindnet-wrx27" [6623ec79-af43-4486-bd89-65e8692e920c] Running
	I0729 11:33:41.290233  135944 system_pods.go:89] "kube-apiserver-ha-691698" [ad0e6226-1f3a-4d3f-a81d-c572dc307e90] Running
	I0729 11:33:41.290236  135944 system_pods.go:89] "kube-apiserver-ha-691698-m02" [03c7a68e-a0df-4d22-a96d-c08d4a6099dd] Running
	I0729 11:33:41.290240  135944 system_pods.go:89] "kube-apiserver-ha-691698-m03" [66ea3cca-4a77-4756-855a-b34c2e420ca7] Running
	I0729 11:33:41.290244  135944 system_pods.go:89] "kube-controller-manager-ha-691698" [33507788-a0ea-4f85-98b8-670617e63b2e] Running
	I0729 11:33:41.290248  135944 system_pods.go:89] "kube-controller-manager-ha-691698-m02" [be300341-bb85-4c72-b66a-f1a5c280e48c] Running
	I0729 11:33:41.290253  135944 system_pods.go:89] "kube-controller-manager-ha-691698-m03" [a0a8f594-4b59-4601-958e-fd524fde33ee] Running
	I0729 11:33:41.290257  135944 system_pods.go:89] "kube-proxy-5hn2s" [b73c788f-9f8d-421e-b967-89b9154ea946] Running
	I0729 11:33:41.290264  135944 system_pods.go:89] "kube-proxy-8p4nc" [c20bd4bc-8fca-437d-854e-b01b594f32f4] Running
	I0729 11:33:41.290268  135944 system_pods.go:89] "kube-proxy-vd69n" [596d3835-5ab1-4009-a1d3-ccde26b14f32] Running
	I0729 11:33:41.290274  135944 system_pods.go:89] "kube-scheduler-ha-691698" [c6a21e51-28c0-41d2-b1a1-30bb1ad4e979] Running
	I0729 11:33:41.290278  135944 system_pods.go:89] "kube-scheduler-ha-691698-m02" [65d29208-4055-4da5-b612-454ef28c5c0e] Running
	I0729 11:33:41.290284  135944 system_pods.go:89] "kube-scheduler-ha-691698-m03" [6519ce66-a98b-4d83-8e81-f1e35896ebdb] Running
	I0729 11:33:41.290288  135944 system_pods.go:89] "kube-vip-ha-691698" [1b5b8d68-2923-4dc5-bcf1-492593eb2d51] Running
	I0729 11:33:41.290293  135944 system_pods.go:89] "kube-vip-ha-691698-m02" [8a2d8ba0-dc4e-4831-b9f2-31c18b9edc91] Running
	I0729 11:33:41.290297  135944 system_pods.go:89] "kube-vip-ha-691698-m03" [0648712d-e530-460f-b39a-c8a61229587f] Running
	I0729 11:33:41.290300  135944 system_pods.go:89] "storage-provisioner" [694c60e1-9d4e-4fea-96e6-21554bbf1aaa] Running
	I0729 11:33:41.290310  135944 system_pods.go:126] duration metric: took 209.683049ms to wait for k8s-apps to be running ...
	I0729 11:33:41.290320  135944 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:33:41.290363  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:33:41.304363  135944 system_svc.go:56] duration metric: took 14.026145ms WaitForService to wait for kubelet
	I0729 11:33:41.304397  135944 kubeadm.go:582] duration metric: took 23.687596649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:33:41.304421  135944 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:33:41.476895  135944 request.go:629] Waited for 172.363425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes
	I0729 11:33:41.477022  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes
	I0729 11:33:41.477033  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:41.477041  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:41.477046  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:41.480521  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:41.481556  135944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:33:41.481581  135944 node_conditions.go:123] node cpu capacity is 2
	I0729 11:33:41.481595  135944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:33:41.481600  135944 node_conditions.go:123] node cpu capacity is 2
	I0729 11:33:41.481605  135944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:33:41.481610  135944 node_conditions.go:123] node cpu capacity is 2
	I0729 11:33:41.481615  135944 node_conditions.go:105] duration metric: took 177.187937ms to run NodePressure ...
	I0729 11:33:41.481631  135944 start.go:241] waiting for startup goroutines ...
	I0729 11:33:41.481660  135944 start.go:255] writing updated cluster config ...
	I0729 11:33:41.481964  135944 ssh_runner.go:195] Run: rm -f paused
	I0729 11:33:41.532361  135944 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:33:41.534444  135944 out.go:177] * Done! kubectl is now configured to use "ha-691698" cluster and "default" namespace by default
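The trace above is minikube's readiness gate for the "ha-691698" cluster: pod_ready.go polls each system pod through the API server at 192.168.39.244:8443 until its Ready condition reports True, api_server.go then confirms the kube-apiserver process and its /healthz endpoint, and kubeadm.go records the aggregate wait. The roughly 200 ms "Waited for ... due to client-side throttling" gaps between requests are consistent with client-go's default rate limit (5 QPS). A minimal client-go sketch of the same per-pod Ready check follows; it is an illustration, not code from the test suite, and it assumes the default kubeconfig written by "minikube start" points at the cluster under test.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: ~/.kube/config points at the cluster under test; minikube
	// writes it there during "minikube start".
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List kube-system pods and report whether each one's Ready condition is
	// True, mirroring the per-pod checks logged by pod_ready.go above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		fmt.Printf("%-50s Ready=%v\n", p.Name, ready)
	}
}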
	
	
	==> CRI-O <==
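The section below is CRI-O's debug-level daemon log from the same node, collected in the post-mortem log dump. The repeating Version, ImageFsInfo, and ListContainers requests are routine CRI polling, and each ListContainers response enumerates the same control-plane containers (busybox, coredns, storage-provisioner, kindnet, kube-proxy, kube-vip, kube-scheduler, kube-apiserver, kube-controller-manager, etcd). For reference, a small sketch that issues the same unfiltered ListContainers call over the CRI gRPC API; it is illustrative only and assumes it is run inside the node VM (e.g. via "minikube ssh") where CRI-O serves its default unix socket.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O's default socket path inside the minikube node VM.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The same unfiltered ListContainers request that recurs in the debug log below.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
	}
}

Running crictl ps inside the node yields the same container list in tabular form.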
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.092237330Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253040092104888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=210bc882-860f-4874-8dfa-4ef3d178c547 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.093887946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2483c2d8-a786-4b76-8e59-0f471c889196 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.093967804Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2483c2d8-a786-4b76-8e59-0f471c889196 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.094261888Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722252826210888442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690309253129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690266805544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47dc452e397f7cb1c57946f2402ede9ae1f47684f951d810ff42eb0164dea598,PodSandboxId:0f5ab4507eb64364350ef70ec120c02b051864b31f2044c38b874890a87052f6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722252690251771556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722252678490935956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172225267
5058374473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c8d491a07d0625279f0e3cbe3dfd94002b73f769b6793807b1a8c8214ee4b3,PodSandboxId:03d80866866230611d1c07b9122ace20a754a9f093ed5194cfac1c8709428dcb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222526579
35455041,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c29f353fca01ed6b9c8c929d7cebfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252655364665094,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c63f4ac923395e3c4f21210b98f155c47ba02f4a51916c9b755155f96154ac6,PodSandboxId:cd880d0b141102f69af0648a41c5c535329ef0c15ad813d4b22fd35e4872208e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252655329457442,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d,PodSandboxId:dba80440eb6efc99f5ed13c10aa1ac0608dd016240ee611fb6e21c77fb5a3641,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252655261275382,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252655267576655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2483c2d8-a786-4b76-8e59-0f471c889196 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.133240421Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b943521f-150b-4e89-8944-66c667446c5f name=/runtime.v1.RuntimeService/Version
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.133323383Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b943521f-150b-4e89-8944-66c667446c5f name=/runtime.v1.RuntimeService/Version
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.134446842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23da699c-7a46-425e-9f5e-acc48bf860ef name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.134929954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253040134908284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23da699c-7a46-425e-9f5e-acc48bf860ef name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.135397317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e14047cc-3d2c-41c6-b4ad-db1ab19ad4d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.135471052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e14047cc-3d2c-41c6-b4ad-db1ab19ad4d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.135732219Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722252826210888442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690309253129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690266805544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47dc452e397f7cb1c57946f2402ede9ae1f47684f951d810ff42eb0164dea598,PodSandboxId:0f5ab4507eb64364350ef70ec120c02b051864b31f2044c38b874890a87052f6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722252690251771556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722252678490935956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172225267
5058374473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c8d491a07d0625279f0e3cbe3dfd94002b73f769b6793807b1a8c8214ee4b3,PodSandboxId:03d80866866230611d1c07b9122ace20a754a9f093ed5194cfac1c8709428dcb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222526579
35455041,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c29f353fca01ed6b9c8c929d7cebfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252655364665094,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c63f4ac923395e3c4f21210b98f155c47ba02f4a51916c9b755155f96154ac6,PodSandboxId:cd880d0b141102f69af0648a41c5c535329ef0c15ad813d4b22fd35e4872208e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252655329457442,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d,PodSandboxId:dba80440eb6efc99f5ed13c10aa1ac0608dd016240ee611fb6e21c77fb5a3641,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252655261275382,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252655267576655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e14047cc-3d2c-41c6-b4ad-db1ab19ad4d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.175800138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7b7a485-4b8c-48d8-9a1e-5c4b99b983a0 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.175926554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7b7a485-4b8c-48d8-9a1e-5c4b99b983a0 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.176945974Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2954966a-06c8-4c02-9377-098847f20c7b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.177431028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253040177408422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2954966a-06c8-4c02-9377-098847f20c7b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.177854589Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb2f76bc-04de-4350-9730-84c9bb440d41 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.177928464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb2f76bc-04de-4350-9730-84c9bb440d41 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.178196447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722252826210888442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690309253129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690266805544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47dc452e397f7cb1c57946f2402ede9ae1f47684f951d810ff42eb0164dea598,PodSandboxId:0f5ab4507eb64364350ef70ec120c02b051864b31f2044c38b874890a87052f6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722252690251771556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722252678490935956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172225267
5058374473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c8d491a07d0625279f0e3cbe3dfd94002b73f769b6793807b1a8c8214ee4b3,PodSandboxId:03d80866866230611d1c07b9122ace20a754a9f093ed5194cfac1c8709428dcb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222526579
35455041,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c29f353fca01ed6b9c8c929d7cebfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252655364665094,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c63f4ac923395e3c4f21210b98f155c47ba02f4a51916c9b755155f96154ac6,PodSandboxId:cd880d0b141102f69af0648a41c5c535329ef0c15ad813d4b22fd35e4872208e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252655329457442,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d,PodSandboxId:dba80440eb6efc99f5ed13c10aa1ac0608dd016240ee611fb6e21c77fb5a3641,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252655261275382,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252655267576655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb2f76bc-04de-4350-9730-84c9bb440d41 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.214365450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=276ec13a-6675-421a-a070-1943e757ed92 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.214440356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=276ec13a-6675-421a-a070-1943e757ed92 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.215662812Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d498fb9-5b2f-4dc8-b7af-d7b0e0965a2a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.216302156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253040216275385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d498fb9-5b2f-4dc8-b7af-d7b0e0965a2a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.216828699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7729390-c2d4-43d3-b4d8-f86ef5ae1a80 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.216902297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7729390-c2d4-43d3-b4d8-f86ef5ae1a80 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:37:20 ha-691698 crio[683]: time="2024-07-29 11:37:20.217135254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722252826210888442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690309253129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690266805544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47dc452e397f7cb1c57946f2402ede9ae1f47684f951d810ff42eb0164dea598,PodSandboxId:0f5ab4507eb64364350ef70ec120c02b051864b31f2044c38b874890a87052f6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722252690251771556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722252678490935956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172225267
5058374473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c8d491a07d0625279f0e3cbe3dfd94002b73f769b6793807b1a8c8214ee4b3,PodSandboxId:03d80866866230611d1c07b9122ace20a754a9f093ed5194cfac1c8709428dcb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222526579
35455041,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c29f353fca01ed6b9c8c929d7cebfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252655364665094,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c63f4ac923395e3c4f21210b98f155c47ba02f4a51916c9b755155f96154ac6,PodSandboxId:cd880d0b141102f69af0648a41c5c535329ef0c15ad813d4b22fd35e4872208e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252655329457442,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d,PodSandboxId:dba80440eb6efc99f5ed13c10aa1ac0608dd016240ee611fb6e21c77fb5a3641,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252655261275382,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252655267576655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7729390-c2d4-43d3-b4d8-f86ef5ae1a80 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	238fb47cd6e36       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   764f56dfda80f       busybox-fc5497c4f-t69zw
	0d819119d1f04       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   d32f436d019c4       coredns-7db6d8ff4d-r48d8
	833566290ab18       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   8d892f55e419c       coredns-7db6d8ff4d-p7zbj
	47dc452e397f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   0f5ab4507eb64       storage-provisioner
	2c476db3ff154       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   ff04fbe0e7040       kindnet-gl972
	2da9ca3c5237b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   7978ad5ef51fb       kube-proxy-5hn2s
	53c8d491a07d0       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   03d8086686623       kube-vip-ha-691698
	24326f59696b1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   f7a6dae3abd7e       kube-scheduler-ha-691698
	2c63f4ac92339       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   cd880d0b14110       kube-apiserver-ha-691698
	1d0e28e4eb5d8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   476f4c4be9581       etcd-ha-691698
	0b984e1e87ad3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   dba80440eb6ef       kube-controller-manager-ha-691698
	
	
	==> coredns [0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53] <==
	[INFO] 10.244.2.2:58368 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000428526s
	[INFO] 10.244.0.4:60406 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00012543s
	[INFO] 10.244.0.4:50254 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000059389s
	[INFO] 10.244.0.4:48812 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00188043s
	[INFO] 10.244.1.2:43643 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173662s
	[INFO] 10.244.1.2:52260 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003470125s
	[INFO] 10.244.1.2:54673 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136747s
	[INFO] 10.244.2.2:34318 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000273221s
	[INFO] 10.244.2.2:60262 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001476515s
	[INFO] 10.244.2.2:57052 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142747s
	[INFO] 10.244.2.2:54120 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108997s
	[INFO] 10.244.1.2:44298 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081482s
	[INFO] 10.244.1.2:57785 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116033s
	[INFO] 10.244.2.2:38389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154869s
	[INFO] 10.244.2.2:33473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139061s
	[INFO] 10.244.2.2:36153 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064585s
	[INFO] 10.244.0.4:36379 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097216s
	[INFO] 10.244.0.4:47834 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063726s
	[INFO] 10.244.1.2:33111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120166s
	[INFO] 10.244.2.2:43983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122897s
	[INFO] 10.244.2.2:35012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148813s
	[INFO] 10.244.2.2:40714 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00011869s
	[INFO] 10.244.0.4:44215 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086794s
	[INFO] 10.244.0.4:38040 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005703s
	[INFO] 10.244.0.4:50677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108307s
	
	
	==> coredns [833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486] <==
	[INFO] 10.244.1.2:56075 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163005s
	[INFO] 10.244.1.2:34415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116898s
	[INFO] 10.244.1.2:36747 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189708s
	[INFO] 10.244.2.2:38790 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020996s
	[INFO] 10.244.2.2:56602 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001921401s
	[INFO] 10.244.2.2:34056 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219216s
	[INFO] 10.244.2.2:60410 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161507s
	[INFO] 10.244.0.4:59522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147092s
	[INFO] 10.244.0.4:33605 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742361s
	[INFO] 10.244.0.4:54567 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076754s
	[INFO] 10.244.0.4:35616 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072926s
	[INFO] 10.244.0.4:50762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001270357s
	[INFO] 10.244.0.4:56719 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059193s
	[INFO] 10.244.0.4:42114 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124091s
	[INFO] 10.244.0.4:54680 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047725s
	[INFO] 10.244.1.2:33443 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111093s
	[INFO] 10.244.1.2:60576 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102839s
	[INFO] 10.244.2.2:47142 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084964s
	[INFO] 10.244.0.4:35741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015832s
	[INFO] 10.244.0.4:39817 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103529s
	[INFO] 10.244.1.2:45931 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134869s
	[INFO] 10.244.1.2:36836 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000217632s
	[INFO] 10.244.1.2:59273 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107311s
	[INFO] 10.244.2.2:49049 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000205027s
	[INFO] 10.244.0.4:42280 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127437s
	
	
	==> describe nodes <==
	Name:               ha-691698
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_31_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:30:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:37:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:34:05 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:34:05 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:34:05 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:34:05 +0000   Mon, 29 Jul 2024 11:31:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-691698
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ffcbde1a62f4ed28ef2171c0da37339
	  System UUID:                8ffcbde1-a62f-4ed2-8ef2-171c0da37339
	  Boot ID:                    f8eb0442-fda7-4803-ab40-821f5c33cb8d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-t69zw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7db6d8ff4d-p7zbj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 coredns-7db6d8ff4d-r48d8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 etcd-ha-691698                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-gl972                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-691698             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-691698    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-5hn2s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-691698             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-691698                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m4s   kube-proxy       
	  Normal  Starting                 6m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s  kubelet          Node ha-691698 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s  kubelet          Node ha-691698 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s  kubelet          Node ha-691698 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m7s   node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal  NodeReady                5m51s  kubelet          Node ha-691698 status is now: NodeReady
	  Normal  RegisteredNode           5m1s   node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal  RegisteredNode           3m49s  node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	
	
	Name:               ha-691698-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_32_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:32:01 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:34:55 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 11:34:04 +0000   Mon, 29 Jul 2024 11:35:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 11:34:04 +0000   Mon, 29 Jul 2024 11:35:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 11:34:04 +0000   Mon, 29 Jul 2024 11:35:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 11:34:04 +0000   Mon, 29 Jul 2024 11:35:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-691698-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c019d6e64b644eff86b333652cd5328b
	  System UUID:                c019d6e6-4b64-4eff-86b3-33652cd5328b
	  Boot ID:                    ffc361c1-a45a-45ad-9852-96429352504d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-22qb4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-691698-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m17s
	  kube-system                 kindnet-wrx27                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m19s
	  kube-system                 kube-apiserver-ha-691698-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-ha-691698-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-8p4nc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-ha-691698-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-vip-ha-691698-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node ha-691698-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node ha-691698-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x7 over 5m19s)  kubelet          Node ha-691698-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           5m1s                   node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-691698-m02 status is now: NodeNotReady
	
	
	Name:               ha-691698-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_33_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:33:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:37:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:34:15 +0000   Mon, 29 Jul 2024 11:33:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:34:15 +0000   Mon, 29 Jul 2024 11:33:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:34:15 +0000   Mon, 29 Jul 2024 11:33:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:34:15 +0000   Mon, 29 Jul 2024 11:33:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    ha-691698-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc0ebb3b7dee46c2bbb6e4b87cde5294
	  System UUID:                dc0ebb3b-7dee-46c2-bbb6-e4b87cde5294
	  Boot ID:                    793cbd49-8fb8-4fa0-9374-8327f823ecfb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-72n5l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-691698-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m4s
	  kube-system                 kindnet-n929l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-ha-691698-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-691698-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-vd69n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ha-691698-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-vip-ha-691698-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node ha-691698-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node ha-691698-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node ha-691698-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	  Normal  RegisteredNode           3m49s                node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	
	
	Name:               ha-691698-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_34_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:37:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:34:50 +0000   Mon, 29 Jul 2024 11:34:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:34:50 +0000   Mon, 29 Jul 2024 11:34:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:34:50 +0000   Mon, 29 Jul 2024 11:34:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:34:50 +0000   Mon, 29 Jul 2024 11:34:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-691698-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 acedffa77bf44161b125b5360bc5ba83
	  System UUID:                acedffa7-7bf4-4161-b125-b5360bc5ba83
	  Boot ID:                    e24b0a1a-2dbd-4235-9799-fdae94d4486d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pknpn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-9k2mb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-691698-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-691698-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-691698-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-691698-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 11:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048983] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036911] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.696256] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.842909] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.530183] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.170622] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.056672] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055838] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.156855] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.147139] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.275583] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.097124] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +4.229544] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.063086] kauditd_printk_skb: 158 callbacks suppressed
	[Jul29 11:31] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[  +0.086846] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.595904] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.192166] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 11:32] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d] <==
	{"level":"warn","ts":"2024-07-29T11:37:20.325068Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.483952Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.49238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.498046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.502042Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.502859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.506315Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.507273Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.510359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.518501Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.524898Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.526868Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.532488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.540528Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.543418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.550873Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.557709Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.564726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.568201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.571029Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.576759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.583469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.589473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:37:20.624562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:37:20 up 6 min,  0 users,  load average: 0.20, 0.18, 0.10
	Linux ha-691698 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756] <==
	I0729 11:36:49.465889       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:36:59.465924       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:36:59.465964       1 main.go:299] handling current node
	I0729 11:36:59.465979       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:36:59.465984       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:36:59.466114       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:36:59.466135       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:36:59.466189       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:36:59.466204       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:37:09.465524       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:37:09.465625       1 main.go:299] handling current node
	I0729 11:37:09.465654       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:37:09.465726       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:37:09.465859       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:37:09.465895       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:37:09.465961       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:37:09.465979       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:37:19.458126       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:37:19.458254       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:37:19.458465       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:37:19.458508       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:37:19.458615       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:37:19.458653       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:37:19.458835       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:37:19.458886       1 main.go:299] handling current node
	
	
	==> kube-apiserver [2c63f4ac923395e3c4f21210b98f155c47ba02f4a51916c9b755155f96154ac6] <==
	I0729 11:31:00.064622       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0729 11:31:00.071166       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.244]
	I0729 11:31:00.072238       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 11:31:00.076602       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 11:31:00.191035       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 11:31:01.601915       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 11:31:01.619848       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 11:31:01.633628       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 11:31:13.500488       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 11:31:14.498332       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0729 11:33:47.806521       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51682: use of closed network connection
	E0729 11:33:47.990492       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51688: use of closed network connection
	E0729 11:33:48.362479       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51722: use of closed network connection
	E0729 11:33:48.544790       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51740: use of closed network connection
	E0729 11:33:48.726572       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51764: use of closed network connection
	E0729 11:33:48.910453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51782: use of closed network connection
	E0729 11:33:49.094351       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51798: use of closed network connection
	E0729 11:33:49.275269       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51818: use of closed network connection
	E0729 11:33:49.565230       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51850: use of closed network connection
	E0729 11:33:49.753170       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51880: use of closed network connection
	E0729 11:33:49.938538       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51896: use of closed network connection
	E0729 11:33:50.112311       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51920: use of closed network connection
	E0729 11:33:50.291182       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51936: use of closed network connection
	E0729 11:33:50.470643       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51952: use of closed network connection
	W0729 11:35:20.087318       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.23 192.168.39.244]
	
	
	==> kube-controller-manager [0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d] <==
	I0729 11:33:42.483797       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.018617ms"
	I0729 11:33:42.483958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.429µs"
	I0729 11:33:42.499060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.73µs"
	I0729 11:33:42.503898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.279µs"
	I0729 11:33:42.602962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.870781ms"
	I0729 11:33:42.751049       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="147.982928ms"
	I0729 11:33:42.773524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.412576ms"
	I0729 11:33:42.773737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.862µs"
	I0729 11:33:42.826198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.806913ms"
	I0729 11:33:42.828414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.92µs"
	I0729 11:33:44.284995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.158µs"
	I0729 11:33:45.169101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.028258ms"
	I0729 11:33:45.169966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="208.869µs"
	I0729 11:33:45.470195       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.740556ms"
	I0729 11:33:45.470853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="283.717µs"
	I0729 11:33:47.241491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.966998ms"
	I0729 11:33:47.241822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.057µs"
	E0729 11:34:19.801986       1 certificate_controller.go:146] Sync csr-2557s failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2557s": the object has been modified; please apply your changes to the latest version and try again
	I0729 11:34:20.090225       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-691698-m04\" does not exist"
	I0729 11:34:20.120158       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-691698-m04" podCIDRs=["10.244.3.0/24"]
	I0729 11:34:23.680106       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-691698-m04"
	I0729 11:34:39.639383       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-691698-m04"
	I0729 11:35:37.066172       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-691698-m04"
	I0729 11:35:37.131448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.969767ms"
	I0729 11:35:37.132185       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.299µs"
	
	
	==> kube-proxy [2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323] <==
	I0729 11:31:15.469088       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:31:15.512253       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.244"]
	I0729 11:31:15.584276       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:31:15.584317       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:31:15.584333       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:31:15.587247       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:31:15.587800       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:31:15.587855       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:31:15.589586       1 config.go:192] "Starting service config controller"
	I0729 11:31:15.590577       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:31:15.590875       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:31:15.590911       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:31:15.592642       1 config.go:319] "Starting node config controller"
	I0729 11:31:15.593517       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:31:15.691565       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:31:15.691660       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:31:15.693908       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0] <==
	W0729 11:30:59.393612       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:30:59.393653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:30:59.483988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:30:59.484034       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:30:59.504614       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:30:59.504700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:30:59.531800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:30:59.531829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:30:59.549354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:30:59.549416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:30:59.573894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:30:59.573973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 11:30:59.609853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:30:59.609951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:30:59.676813       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:30:59.676934       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 11:31:01.526304       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 11:34:20.179491       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pqx6x\": pod kube-proxy-pqx6x is already assigned to node \"ha-691698-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pqx6x" node="ha-691698-m04"
	E0729 11:34:20.180944       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 88b81468-2d64-4496-a593-68698a8a161e(kube-system/kube-proxy-pqx6x) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-pqx6x"
	E0729 11:34:20.181390       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pqx6x\": pod kube-proxy-pqx6x is already assigned to node \"ha-691698-m04\"" pod="kube-system/kube-proxy-pqx6x"
	I0729 11:34:20.181582       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-pqx6x" node="ha-691698-m04"
	E0729 11:34:20.181307       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pknpn\": pod kindnet-pknpn is already assigned to node \"ha-691698-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pknpn" node="ha-691698-m04"
	E0729 11:34:20.186876       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ea8a7c41-23fc-4ded-80ef-41744345895d(kube-system/kindnet-pknpn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pknpn"
	E0729 11:34:20.187114       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pknpn\": pod kindnet-pknpn is already assigned to node \"ha-691698-m04\"" pod="kube-system/kindnet-pknpn"
	I0729 11:34:20.187205       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pknpn" node="ha-691698-m04"
	
	
	==> kubelet <==
	Jul 29 11:33:42 ha-691698 kubelet[1382]: I0729 11:33:42.558923    1382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrsf7\" (UniqueName: \"kubernetes.io/projected/ba70f798-7f59-4cd9-955c-82ce880ebcf9-kube-api-access-vrsf7\") pod \"busybox-fc5497c4f-t69zw\" (UID: \"ba70f798-7f59-4cd9-955c-82ce880ebcf9\") " pod="default/busybox-fc5497c4f-t69zw"
	Jul 29 11:33:43 ha-691698 kubelet[1382]: E0729 11:33:43.709412    1382 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jul 29 11:33:43 ha-691698 kubelet[1382]: E0729 11:33:43.709879    1382 projected.go:200] Error preparing data for projected volume kube-api-access-vrsf7 for pod default/busybox-fc5497c4f-t69zw: failed to sync configmap cache: timed out waiting for the condition
	Jul 29 11:33:43 ha-691698 kubelet[1382]: E0729 11:33:43.710229    1382 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba70f798-7f59-4cd9-955c-82ce880ebcf9-kube-api-access-vrsf7 podName:ba70f798-7f59-4cd9-955c-82ce880ebcf9 nodeName:}" failed. No retries permitted until 2024-07-29 11:33:44.21005177 +0000 UTC m=+162.824252495 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vrsf7" (UniqueName: "kubernetes.io/projected/ba70f798-7f59-4cd9-955c-82ce880ebcf9-kube-api-access-vrsf7") pod "busybox-fc5497c4f-t69zw" (UID: "ba70f798-7f59-4cd9-955c-82ce880ebcf9") : failed to sync configmap cache: timed out waiting for the condition
	Jul 29 11:33:47 ha-691698 kubelet[1382]: I0729 11:33:47.200200    1382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-t69zw" podStartSLOduration=3.725352335 podStartE2EDuration="5.200172507s" podCreationTimestamp="2024-07-29 11:33:42 +0000 UTC" firstStartedPulling="2024-07-29 11:33:44.719321186 +0000 UTC m=+163.333521899" lastFinishedPulling="2024-07-29 11:33:46.194141355 +0000 UTC m=+164.808342071" observedRunningTime="2024-07-29 11:33:47.199737465 +0000 UTC m=+165.813938197" watchObservedRunningTime="2024-07-29 11:33:47.200172507 +0000 UTC m=+165.814373239"
	Jul 29 11:34:01 ha-691698 kubelet[1382]: E0729 11:34:01.566975    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:34:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:34:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:34:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:34:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:35:01 ha-691698 kubelet[1382]: E0729 11:35:01.568463    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:35:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:35:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:35:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:35:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:36:01 ha-691698 kubelet[1382]: E0729 11:36:01.568974    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:36:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:36:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:36:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:36:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:37:01 ha-691698 kubelet[1382]: E0729 11:37:01.567565    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:37:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:37:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:37:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:37:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-691698 -n ha-691698
helpers_test.go:261: (dbg) Run:  kubectl --context ha-691698 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (57.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr: exit status 3 (3.201013706s)

                                                
                                                
-- stdout --
	ha-691698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-691698-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:37:25.157174  140756 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:37:25.157296  140756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:25.157315  140756 out.go:304] Setting ErrFile to fd 2...
	I0729 11:37:25.157325  140756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:25.157504  140756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:37:25.157698  140756 out.go:298] Setting JSON to false
	I0729 11:37:25.157732  140756 mustload.go:65] Loading cluster: ha-691698
	I0729 11:37:25.157859  140756 notify.go:220] Checking for updates...
	I0729 11:37:25.158235  140756 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:37:25.158254  140756 status.go:255] checking status of ha-691698 ...
	I0729 11:37:25.158669  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:25.158710  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:25.176368  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
	I0729 11:37:25.176935  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:25.177525  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:25.177547  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:25.177995  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:25.178274  140756 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:37:25.180132  140756 status.go:330] ha-691698 host status = "Running" (err=<nil>)
	I0729 11:37:25.180148  140756 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:25.180489  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:25.180557  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:25.196311  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33487
	I0729 11:37:25.196810  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:25.197321  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:25.197341  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:25.197703  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:25.197902  140756 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:37:25.200892  140756 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:25.201284  140756 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:25.201304  140756 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:25.201485  140756 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:25.201756  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:25.201805  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:25.218021  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0729 11:37:25.218539  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:25.219074  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:25.219100  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:25.219414  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:25.219572  140756 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:37:25.219766  140756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:25.219802  140756 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:37:25.222823  140756 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:25.223241  140756 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:25.223273  140756 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:25.223451  140756 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:37:25.223624  140756 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:37:25.223791  140756 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:37:25.223931  140756 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:37:25.308147  140756 ssh_runner.go:195] Run: systemctl --version
	I0729 11:37:25.317164  140756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:25.333401  140756 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:25.333432  140756 api_server.go:166] Checking apiserver status ...
	I0729 11:37:25.333467  140756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:25.349229  140756 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0729 11:37:25.359217  140756 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:25.359268  140756 ssh_runner.go:195] Run: ls
	I0729 11:37:25.363561  140756 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:25.369325  140756 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:25.369354  140756 status.go:422] ha-691698 apiserver status = Running (err=<nil>)
	I0729 11:37:25.369364  140756 status.go:257] ha-691698 status: &{Name:ha-691698 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:25.369381  140756 status.go:255] checking status of ha-691698-m02 ...
	I0729 11:37:25.369712  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:25.369739  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:25.384653  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46421
	I0729 11:37:25.385083  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:25.385619  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:25.385643  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:25.385981  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:25.386173  140756 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:37:25.387832  140756 status.go:330] ha-691698-m02 host status = "Running" (err=<nil>)
	I0729 11:37:25.387847  140756 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:25.388119  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:25.388155  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:25.403127  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44219
	I0729 11:37:25.403609  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:25.404085  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:25.404105  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:25.404495  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:25.404701  140756 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:37:25.407407  140756 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:25.407838  140756 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:25.407868  140756 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:25.407961  140756 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:25.408236  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:25.408259  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:25.423320  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38413
	I0729 11:37:25.423788  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:25.424288  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:25.424317  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:25.424705  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:25.424903  140756 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:37:25.425126  140756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:25.425149  140756 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:37:25.427903  140756 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:25.428388  140756 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:25.428411  140756 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:25.428620  140756 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:37:25.428810  140756 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:37:25.428975  140756 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:37:25.429125  140756 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	W0729 11:37:27.965284  140756 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.5:22: connect: no route to host
	W0729 11:37:27.965388  140756 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	E0729 11:37:27.965404  140756 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:27.965412  140756 status.go:257] ha-691698-m02 status: &{Name:ha-691698-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:37:27.965429  140756 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:27.965436  140756 status.go:255] checking status of ha-691698-m03 ...
	I0729 11:37:27.965751  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:27.965793  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:27.981010  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38989
	I0729 11:37:27.981453  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:27.981940  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:27.981963  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:27.982317  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:27.982486  140756 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:37:27.984005  140756 status.go:330] ha-691698-m03 host status = "Running" (err=<nil>)
	I0729 11:37:27.984036  140756 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:27.984322  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:27.984370  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:27.999898  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
	I0729 11:37:28.000424  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:28.000912  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:28.000938  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:28.001302  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:28.001523  140756 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:37:28.004552  140756 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:28.005018  140756 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:28.005048  140756 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:28.005229  140756 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:28.005599  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:28.005639  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:28.021712  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32951
	I0729 11:37:28.022189  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:28.022696  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:28.022716  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:28.023025  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:28.023200  140756 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:37:28.023426  140756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:28.023451  140756 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:37:28.026083  140756 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:28.026550  140756 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:28.026579  140756 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:28.026702  140756 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:37:28.026861  140756 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:37:28.026979  140756 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:37:28.027147  140756 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:37:28.108234  140756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:28.124441  140756 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:28.124507  140756 api_server.go:166] Checking apiserver status ...
	I0729 11:37:28.124554  140756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:28.138320  140756 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0729 11:37:28.148347  140756 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:28.148417  140756 ssh_runner.go:195] Run: ls
	I0729 11:37:28.154912  140756 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:28.161094  140756 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:28.161130  140756 status.go:422] ha-691698-m03 apiserver status = Running (err=<nil>)
	I0729 11:37:28.161141  140756 status.go:257] ha-691698-m03 status: &{Name:ha-691698-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:28.161163  140756 status.go:255] checking status of ha-691698-m04 ...
	I0729 11:37:28.161504  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:28.161557  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:28.177949  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43535
	I0729 11:37:28.178425  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:28.178945  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:28.178968  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:28.179256  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:28.179454  140756 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:37:28.181262  140756 status.go:330] ha-691698-m04 host status = "Running" (err=<nil>)
	I0729 11:37:28.181278  140756 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:28.181569  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:28.181616  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:28.196782  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0729 11:37:28.197301  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:28.197771  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:28.197791  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:28.198116  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:28.198357  140756 main.go:141] libmachine: (ha-691698-m04) Calling .GetIP
	I0729 11:37:28.201380  140756 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:28.201891  140756 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:28.201920  140756 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:28.202090  140756 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:28.202476  140756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:28.202521  140756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:28.218837  140756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I0729 11:37:28.219302  140756 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:28.219833  140756 main.go:141] libmachine: Using API Version  1
	I0729 11:37:28.219859  140756 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:28.220239  140756 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:28.220471  140756 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:37:28.220686  140756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:28.220723  140756 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:37:28.223886  140756 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:28.224297  140756 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:28.224325  140756 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:28.224516  140756 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:37:28.224736  140756 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:37:28.224897  140756 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:37:28.225081  140756 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	I0729 11:37:28.299597  140756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:28.314335  140756 status.go:257] ha-691698-m04 status: &{Name:ha-691698-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr: exit status 3 (5.475119825s)

                                                
                                                
-- stdout --
	ha-691698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-691698-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:37:29.030481  140856 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:37:29.030832  140856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:29.030866  140856 out.go:304] Setting ErrFile to fd 2...
	I0729 11:37:29.030877  140856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:29.031343  140856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:37:29.031620  140856 out.go:298] Setting JSON to false
	I0729 11:37:29.031660  140856 mustload.go:65] Loading cluster: ha-691698
	I0729 11:37:29.031777  140856 notify.go:220] Checking for updates...
	I0729 11:37:29.032229  140856 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:37:29.032253  140856 status.go:255] checking status of ha-691698 ...
	I0729 11:37:29.032672  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:29.032725  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:29.051701  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0729 11:37:29.052204  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:29.052819  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:29.052858  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:29.053298  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:29.053618  140856 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:37:29.055348  140856 status.go:330] ha-691698 host status = "Running" (err=<nil>)
	I0729 11:37:29.055371  140856 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:29.055807  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:29.055872  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:29.071885  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36591
	I0729 11:37:29.072353  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:29.072946  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:29.072996  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:29.073326  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:29.073534  140856 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:37:29.076364  140856 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:29.076838  140856 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:29.076866  140856 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:29.077081  140856 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:29.077370  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:29.077430  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:29.094404  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0729 11:37:29.094879  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:29.095418  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:29.095443  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:29.095820  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:29.096009  140856 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:37:29.096214  140856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:29.096233  140856 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:37:29.099323  140856 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:29.099570  140856 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:29.099586  140856 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:29.099759  140856 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:37:29.099939  140856 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:37:29.100070  140856 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:37:29.100204  140856 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:37:29.180232  140856 ssh_runner.go:195] Run: systemctl --version
	I0729 11:37:29.186020  140856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:29.199199  140856 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:29.199233  140856 api_server.go:166] Checking apiserver status ...
	I0729 11:37:29.199267  140856 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:29.212531  140856 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0729 11:37:29.222147  140856 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:29.222210  140856 ssh_runner.go:195] Run: ls
	I0729 11:37:29.226692  140856 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:29.234607  140856 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:29.234641  140856 status.go:422] ha-691698 apiserver status = Running (err=<nil>)
	I0729 11:37:29.234654  140856 status.go:257] ha-691698 status: &{Name:ha-691698 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:29.234676  140856 status.go:255] checking status of ha-691698-m02 ...
	I0729 11:37:29.235120  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:29.235152  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:29.250853  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I0729 11:37:29.251286  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:29.251791  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:29.251818  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:29.252110  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:29.252294  140856 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:37:29.253826  140856 status.go:330] ha-691698-m02 host status = "Running" (err=<nil>)
	I0729 11:37:29.253847  140856 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:29.254149  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:29.254185  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:29.269082  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0729 11:37:29.269613  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:29.270139  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:29.270164  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:29.270557  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:29.270772  140856 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:37:29.274063  140856 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:29.274477  140856 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:29.274511  140856 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:29.274734  140856 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:29.275102  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:29.275151  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:29.290637  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
	I0729 11:37:29.291136  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:29.291611  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:29.291636  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:29.291966  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:29.292154  140856 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:37:29.292362  140856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:29.292383  140856 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:37:29.294953  140856 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:29.295452  140856 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:29.295502  140856 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:29.295614  140856 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:37:29.295773  140856 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:37:29.295911  140856 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:37:29.296051  140856 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	W0729 11:37:31.037265  140856 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:31.037338  140856 retry.go:31] will retry after 175.422691ms: dial tcp 192.168.39.5:22: connect: no route to host
	W0729 11:37:34.109303  140856 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.5:22: connect: no route to host
	W0729 11:37:34.109403  140856 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	E0729 11:37:34.109427  140856 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:34.109441  140856 status.go:257] ha-691698-m02 status: &{Name:ha-691698-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:37:34.109463  140856 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:34.109477  140856 status.go:255] checking status of ha-691698-m03 ...
	I0729 11:37:34.109819  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:34.109877  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:34.125980  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42787
	I0729 11:37:34.126495  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:34.127033  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:34.127061  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:34.127408  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:34.127624  140856 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:37:34.129472  140856 status.go:330] ha-691698-m03 host status = "Running" (err=<nil>)
	I0729 11:37:34.129491  140856 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:34.129827  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:34.129880  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:34.146681  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0729 11:37:34.147248  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:34.147730  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:34.147766  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:34.148115  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:34.148315  140856 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:37:34.151467  140856 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:34.152068  140856 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:34.152096  140856 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:34.152301  140856 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:34.152697  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:34.152738  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:34.168575  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42251
	I0729 11:37:34.169040  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:34.169602  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:34.169622  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:34.169956  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:34.170172  140856 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:37:34.170363  140856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:34.170388  140856 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:37:34.173401  140856 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:34.173815  140856 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:34.173844  140856 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:34.174012  140856 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:37:34.174187  140856 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:37:34.174309  140856 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:37:34.174431  140856 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:37:34.256003  140856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:34.270392  140856 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:34.270430  140856 api_server.go:166] Checking apiserver status ...
	I0729 11:37:34.270481  140856 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:34.283812  140856 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0729 11:37:34.293395  140856 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:34.293452  140856 ssh_runner.go:195] Run: ls
	I0729 11:37:34.297937  140856 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:34.302185  140856 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:34.302212  140856 status.go:422] ha-691698-m03 apiserver status = Running (err=<nil>)
	I0729 11:37:34.302221  140856 status.go:257] ha-691698-m03 status: &{Name:ha-691698-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:34.302236  140856 status.go:255] checking status of ha-691698-m04 ...
	I0729 11:37:34.302542  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:34.302566  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:34.319715  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I0729 11:37:34.320361  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:34.320897  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:34.320923  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:34.321234  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:34.321458  140856 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:37:34.323432  140856 status.go:330] ha-691698-m04 host status = "Running" (err=<nil>)
	I0729 11:37:34.323453  140856 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:34.323861  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:34.323900  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:34.339277  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I0729 11:37:34.339865  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:34.340432  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:34.340460  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:34.340859  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:34.341091  140856 main.go:141] libmachine: (ha-691698-m04) Calling .GetIP
	I0729 11:37:34.344085  140856 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:34.344618  140856 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:34.344647  140856 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:34.344795  140856 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:34.345206  140856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:34.345257  140856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:34.360491  140856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37783
	I0729 11:37:34.360956  140856 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:34.361502  140856 main.go:141] libmachine: Using API Version  1
	I0729 11:37:34.361524  140856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:34.361915  140856 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:34.362145  140856 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:37:34.362375  140856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:34.362395  140856 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:37:34.365563  140856 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:34.365910  140856 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:34.365935  140856 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:34.366104  140856 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:37:34.366308  140856 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:37:34.366487  140856 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:37:34.366648  140856 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	I0729 11:37:34.443959  140856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:34.459319  140856 status.go:257] ha-691698-m04 status: &{Name:ha-691698-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
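Note: the stderr above shows why this status run exits with code 3. For ha-691698-m02 the command never gets an SSH session, because every TCP dial to 192.168.39.5:22 fails with "no route to host" even after a retry, so that node is reported as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent while ha-691698, ha-691698-m03 and ha-691698-m04 pass their checks. The snippet below is a minimal Go sketch of that kind of port-22 reachability probe with a short retry; it is illustrative only, not minikube's actual sshutil/retry code, and the address and delay are taken from the log above.

// probe_ssh.go - hypothetical sketch of an SSH-port reachability probe with retry,
// mirroring the "dial failure (will retry)" lines in the stderr above.
package main

import (
	"fmt"
	"net"
	"time"
)

func probeSSH(addr string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil // port 22 reachable; an SSH session could now be attempted
		}
		time.Sleep(delay) // cf. "will retry after 175.422691ms" in the log
	}
	return fmt.Errorf("ssh port unreachable after %d attempts: %w", attempts, err)
}

func main() {
	// 192.168.39.5 is ha-691698-m02's address from the DHCP lease shown above.
	if err := probeSSH("192.168.39.5:22", 3, 200*time.Millisecond); err != nil {
		fmt.Println(err) // here: "connect: no route to host", hence Host:Error in the status
	}
}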
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr: exit status 3 (4.769451239s)

-- stdout --
	ha-691698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-691698-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0729 11:37:35.888115  140955 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:37:35.888374  140955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:35.888384  140955 out.go:304] Setting ErrFile to fd 2...
	I0729 11:37:35.888389  140955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:35.888600  140955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:37:35.888768  140955 out.go:298] Setting JSON to false
	I0729 11:37:35.888794  140955 mustload.go:65] Loading cluster: ha-691698
	I0729 11:37:35.888930  140955 notify.go:220] Checking for updates...
	I0729 11:37:35.889292  140955 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:37:35.889312  140955 status.go:255] checking status of ha-691698 ...
	I0729 11:37:35.889837  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:35.889890  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:35.910552  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38133
	I0729 11:37:35.911039  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:35.911771  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:35.911799  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:35.912338  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:35.912627  140955 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:37:35.914696  140955 status.go:330] ha-691698 host status = "Running" (err=<nil>)
	I0729 11:37:35.914717  140955 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:35.915197  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:35.915256  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:35.932036  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45959
	I0729 11:37:35.932543  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:35.933166  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:35.933199  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:35.933536  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:35.933771  140955 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:37:35.937028  140955 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:35.937655  140955 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:35.937692  140955 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:35.937816  140955 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:35.938234  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:35.938280  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:35.957513  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42593
	I0729 11:37:35.958020  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:35.958616  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:35.958649  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:35.959080  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:35.959336  140955 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:37:35.959634  140955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:35.959664  140955 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:37:35.963122  140955 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:35.963650  140955 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:35.963673  140955 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:35.963876  140955 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:37:35.964060  140955 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:37:35.964256  140955 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:37:35.964425  140955 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:37:36.044296  140955 ssh_runner.go:195] Run: systemctl --version
	I0729 11:37:36.050241  140955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:36.065037  140955 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:36.065071  140955 api_server.go:166] Checking apiserver status ...
	I0729 11:37:36.065119  140955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:36.079190  140955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0729 11:37:36.089622  140955 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:36.089689  140955 ssh_runner.go:195] Run: ls
	I0729 11:37:36.093627  140955 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:36.097853  140955 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:36.097881  140955 status.go:422] ha-691698 apiserver status = Running (err=<nil>)
	I0729 11:37:36.097895  140955 status.go:257] ha-691698 status: &{Name:ha-691698 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:36.097917  140955 status.go:255] checking status of ha-691698-m02 ...
	I0729 11:37:36.098313  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:36.098344  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:36.113921  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0729 11:37:36.114366  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:36.114930  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:36.114951  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:36.115277  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:36.115483  140955 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:37:36.117086  140955 status.go:330] ha-691698-m02 host status = "Running" (err=<nil>)
	I0729 11:37:36.117103  140955 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:36.117486  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:36.117554  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:36.132600  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33039
	I0729 11:37:36.133188  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:36.133634  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:36.133655  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:36.134072  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:36.134306  140955 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:37:36.137377  140955 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:36.137935  140955 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:36.137959  140955 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:36.138138  140955 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:36.138498  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:36.138542  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:36.154698  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36279
	I0729 11:37:36.155215  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:36.155786  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:36.155814  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:36.156152  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:36.156377  140955 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:37:36.156659  140955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:36.156681  140955 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:37:36.159685  140955 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:36.160202  140955 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:36.160224  140955 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:36.160456  140955 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:37:36.160640  140955 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:37:36.160819  140955 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:37:36.160972  140955 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	W0729 11:37:37.181294  140955 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:37.181381  140955 retry.go:31] will retry after 248.132805ms: dial tcp 192.168.39.5:22: connect: no route to host
	W0729 11:37:40.253268  140955 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.5:22: connect: no route to host
	W0729 11:37:40.253370  140955 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	E0729 11:37:40.253388  140955 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:40.253395  140955 status.go:257] ha-691698-m02 status: &{Name:ha-691698-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:37:40.253415  140955 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:40.253423  140955 status.go:255] checking status of ha-691698-m03 ...
	I0729 11:37:40.253764  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:40.253809  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:40.269426  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34935
	I0729 11:37:40.269861  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:40.270385  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:40.270405  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:40.270949  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:40.271172  140955 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:37:40.272931  140955 status.go:330] ha-691698-m03 host status = "Running" (err=<nil>)
	I0729 11:37:40.272947  140955 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:40.273347  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:40.273422  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:40.289319  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37723
	I0729 11:37:40.289812  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:40.290311  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:40.290341  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:40.290746  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:40.290983  140955 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:37:40.293991  140955 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:40.294639  140955 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:40.294664  140955 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:40.294889  140955 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:40.295201  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:40.295236  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:40.310889  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40965
	I0729 11:37:40.311433  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:40.312098  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:40.312127  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:40.312463  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:40.312698  140955 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:37:40.312939  140955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:40.312987  140955 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:37:40.315687  140955 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:40.316113  140955 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:40.316138  140955 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:40.316481  140955 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:37:40.316687  140955 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:37:40.316881  140955 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:37:40.317088  140955 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:37:40.400471  140955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:40.415524  140955 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:40.415557  140955 api_server.go:166] Checking apiserver status ...
	I0729 11:37:40.415600  140955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:40.430429  140955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0729 11:37:40.441071  140955 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:40.441124  140955 ssh_runner.go:195] Run: ls
	I0729 11:37:40.445469  140955 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:40.450753  140955 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:40.450853  140955 status.go:422] ha-691698-m03 apiserver status = Running (err=<nil>)
	I0729 11:37:40.450885  140955 status.go:257] ha-691698-m03 status: &{Name:ha-691698-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:40.450910  140955 status.go:255] checking status of ha-691698-m04 ...
	I0729 11:37:40.451238  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:40.451265  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:40.468312  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45431
	I0729 11:37:40.468790  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:40.469291  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:40.469314  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:40.469706  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:40.469898  140955 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:37:40.471460  140955 status.go:330] ha-691698-m04 host status = "Running" (err=<nil>)
	I0729 11:37:40.471479  140955 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:40.471756  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:40.471778  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:40.487257  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
	I0729 11:37:40.487761  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:40.488216  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:40.488242  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:40.488538  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:40.488795  140955 main.go:141] libmachine: (ha-691698-m04) Calling .GetIP
	I0729 11:37:40.491837  140955 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:40.492346  140955 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:40.492371  140955 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:40.492598  140955 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:40.492933  140955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:40.492976  140955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:40.509551  140955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I0729 11:37:40.510016  140955 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:40.510569  140955 main.go:141] libmachine: Using API Version  1
	I0729 11:37:40.510591  140955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:40.510948  140955 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:40.511235  140955 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:37:40.511455  140955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:40.511479  140955 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:37:40.514956  140955 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:40.515430  140955 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:40.515459  140955 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:40.515662  140955 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:37:40.515900  140955 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:37:40.516094  140955 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:37:40.516277  140955 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	I0729 11:37:40.595945  140955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:40.610280  140955 status.go:257] ha-691698-m04 status: &{Name:ha-691698-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
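Note: this second run repeats the pattern. ha-691698 and ha-691698-m03 are healthy, m02 still has no route to host on port 22, and for the reachable control-plane nodes the log verifies the apiserver by fetching https://192.168.39.254:8443/healthz and treating a 200 "ok" response as Running. Below is a minimal sketch of that health probe; it is illustrative only, not minikube's api_server.go, and it skips TLS verification where the real client would trust the profile's cluster CA instead.

// healthz_probe.go - hypothetical sketch of the apiserver healthz check seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip certificate verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// 192.168.39.254:8443 is the ha-691698 control-plane endpoint from the log above.
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "200: ok" for a Running apiserver
}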
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr: exit status 3 (3.743732877s)

-- stdout --
	ha-691698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-691698-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0729 11:37:43.375776  141071 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:37:43.375914  141071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:43.375924  141071 out.go:304] Setting ErrFile to fd 2...
	I0729 11:37:43.375928  141071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:43.376119  141071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:37:43.376326  141071 out.go:298] Setting JSON to false
	I0729 11:37:43.376355  141071 mustload.go:65] Loading cluster: ha-691698
	I0729 11:37:43.376410  141071 notify.go:220] Checking for updates...
	I0729 11:37:43.376794  141071 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:37:43.376813  141071 status.go:255] checking status of ha-691698 ...
	I0729 11:37:43.377261  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:43.377329  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:43.395624  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0729 11:37:43.396246  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:43.396815  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:43.396841  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:43.397219  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:43.397455  141071 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:37:43.399407  141071 status.go:330] ha-691698 host status = "Running" (err=<nil>)
	I0729 11:37:43.399431  141071 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:43.399734  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:43.399789  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:43.415433  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35315
	I0729 11:37:43.415956  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:43.416478  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:43.416506  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:43.416923  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:43.417138  141071 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:37:43.420309  141071 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:43.420757  141071 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:43.420826  141071 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:43.420991  141071 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:43.421389  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:43.421432  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:43.438230  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0729 11:37:43.438686  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:43.439250  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:43.439279  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:43.439675  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:43.439902  141071 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:37:43.440163  141071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:43.440205  141071 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:37:43.443279  141071 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:43.443754  141071 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:43.443786  141071 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:43.443972  141071 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:37:43.444188  141071 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:37:43.444349  141071 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:37:43.444538  141071 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:37:43.532570  141071 ssh_runner.go:195] Run: systemctl --version
	I0729 11:37:43.539456  141071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:43.554416  141071 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:43.554448  141071 api_server.go:166] Checking apiserver status ...
	I0729 11:37:43.554487  141071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:43.568891  141071 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0729 11:37:43.578897  141071 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:43.578952  141071 ssh_runner.go:195] Run: ls
	I0729 11:37:43.583259  141071 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:43.588036  141071 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:43.588081  141071 status.go:422] ha-691698 apiserver status = Running (err=<nil>)
	I0729 11:37:43.588093  141071 status.go:257] ha-691698 status: &{Name:ha-691698 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:43.588109  141071 status.go:255] checking status of ha-691698-m02 ...
	I0729 11:37:43.588397  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:43.588433  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:43.604560  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I0729 11:37:43.605099  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:43.605592  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:43.605615  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:43.605920  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:43.606170  141071 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:37:43.607709  141071 status.go:330] ha-691698-m02 host status = "Running" (err=<nil>)
	I0729 11:37:43.607732  141071 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:43.608105  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:43.608141  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:43.623475  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0729 11:37:43.623953  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:43.624496  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:43.624523  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:43.624852  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:43.625046  141071 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:37:43.627875  141071 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:43.628339  141071 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:43.628370  141071 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:43.628526  141071 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:43.628856  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:43.628906  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:43.644574  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46097
	I0729 11:37:43.645006  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:43.645474  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:43.645494  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:43.645869  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:43.646100  141071 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:37:43.646295  141071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:43.646320  141071 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:37:43.649104  141071 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:43.649526  141071 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:43.649546  141071 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:43.649715  141071 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:37:43.649884  141071 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:37:43.650012  141071 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:37:43.650163  141071 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	W0729 11:37:46.717278  141071 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.5:22: connect: no route to host
	W0729 11:37:46.717373  141071 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	E0729 11:37:46.717387  141071 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:46.717397  141071 status.go:257] ha-691698-m02 status: &{Name:ha-691698-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:37:46.717414  141071 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:46.717421  141071 status.go:255] checking status of ha-691698-m03 ...
	I0729 11:37:46.717754  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:46.717803  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:46.732985  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41969
	I0729 11:37:46.733477  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:46.733974  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:46.734005  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:46.734374  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:46.734591  141071 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:37:46.736118  141071 status.go:330] ha-691698-m03 host status = "Running" (err=<nil>)
	I0729 11:37:46.736137  141071 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:46.736425  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:46.736463  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:46.752273  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I0729 11:37:46.752777  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:46.753350  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:46.753373  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:46.753694  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:46.753874  141071 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:37:46.756765  141071 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:46.757224  141071 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:46.757256  141071 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:46.757375  141071 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:46.757700  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:46.757751  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:46.772541  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37829
	I0729 11:37:46.773013  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:46.773525  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:46.773546  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:46.773885  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:46.774113  141071 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:37:46.774305  141071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:46.774324  141071 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:37:46.777194  141071 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:46.777651  141071 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:46.777671  141071 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:46.777837  141071 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:37:46.777992  141071 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:37:46.778130  141071 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:37:46.778268  141071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:37:46.865058  141071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:46.880389  141071 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:46.880424  141071 api_server.go:166] Checking apiserver status ...
	I0729 11:37:46.880465  141071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:46.894608  141071 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0729 11:37:46.905670  141071 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:46.905747  141071 ssh_runner.go:195] Run: ls
	I0729 11:37:46.910162  141071 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:46.915177  141071 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:46.915200  141071 status.go:422] ha-691698-m03 apiserver status = Running (err=<nil>)
	I0729 11:37:46.915211  141071 status.go:257] ha-691698-m03 status: &{Name:ha-691698-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:46.915256  141071 status.go:255] checking status of ha-691698-m04 ...
	I0729 11:37:46.915561  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:46.915594  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:46.930886  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I0729 11:37:46.931353  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:46.931815  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:46.931837  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:46.932249  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:46.932475  141071 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:37:46.934222  141071 status.go:330] ha-691698-m04 host status = "Running" (err=<nil>)
	I0729 11:37:46.934239  141071 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:46.934541  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:46.934565  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:46.949795  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0729 11:37:46.950267  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:46.950751  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:46.950771  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:46.951062  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:46.951301  141071 main.go:141] libmachine: (ha-691698-m04) Calling .GetIP
	I0729 11:37:46.954092  141071 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:46.954516  141071 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:46.954556  141071 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:46.954694  141071 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:46.955020  141071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:46.955065  141071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:46.970037  141071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39081
	I0729 11:37:46.970473  141071 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:46.970922  141071 main.go:141] libmachine: Using API Version  1
	I0729 11:37:46.970954  141071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:46.971261  141071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:46.971484  141071 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:37:46.971661  141071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:46.971681  141071 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:37:46.974187  141071 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:46.974616  141071 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:46.974638  141071 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:46.974776  141071 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:37:46.974973  141071 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:37:46.975156  141071 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:37:46.975291  141071 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	I0729 11:37:47.055741  141071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:47.070009  141071 status.go:257] ha-691698-m04 status: &{Name:ha-691698-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
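The stderr block above shows the status probe on ha-691698-m02 failing before it can run `df -h /var`: the SSH dial to 192.168.39.5:22 returns "connect: no route to host", while ha-691698-m03 and ha-691698-m04 answer normally. As a hedged illustration only (this is not minikube's own code), a TCP probe of the node addresses taken from the DHCP leases in the log reproduces that symptom:

// Illustrative sketch, not minikube's implementation: dial port 22 on the
// node IPs reported in the log above. An unreachable guest on the libvirt
// network yields "connect: no route to host", matching ha-691698-m02.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// IPs from the DHCP leases in the log; 22 is the SSH port sshutil dials.
	for _, addr := range []string{"192.168.39.5:22", "192.168.39.23:22", "192.168.39.84:22"} {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // e.g. "connect: no route to host"
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}
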
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr: exit status 3 (3.738600297s)

                                                
                                                
-- stdout --
	ha-691698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-691698-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:37:49.818091  141171 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:37:49.818373  141171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:49.818382  141171 out.go:304] Setting ErrFile to fd 2...
	I0729 11:37:49.818385  141171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:49.818564  141171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:37:49.818741  141171 out.go:298] Setting JSON to false
	I0729 11:37:49.818767  141171 mustload.go:65] Loading cluster: ha-691698
	I0729 11:37:49.818829  141171 notify.go:220] Checking for updates...
	I0729 11:37:49.819130  141171 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:37:49.819145  141171 status.go:255] checking status of ha-691698 ...
	I0729 11:37:49.819548  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:49.819609  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:49.836722  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40973
	I0729 11:37:49.837262  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:49.837945  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:49.837965  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:49.838285  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:49.838494  141171 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:37:49.840510  141171 status.go:330] ha-691698 host status = "Running" (err=<nil>)
	I0729 11:37:49.840533  141171 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:49.840978  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:49.841033  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:49.856557  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0729 11:37:49.857068  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:49.857622  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:49.857661  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:49.858014  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:49.858214  141171 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:37:49.861155  141171 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:49.861682  141171 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:49.861716  141171 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:49.861885  141171 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:49.862305  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:49.862356  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:49.878029  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35273
	I0729 11:37:49.878478  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:49.878979  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:49.879011  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:49.879356  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:49.879561  141171 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:37:49.879768  141171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:49.879795  141171 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:37:49.882785  141171 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:49.883232  141171 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:49.883266  141171 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:49.883386  141171 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:37:49.883577  141171 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:37:49.883771  141171 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:37:49.883890  141171 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:37:49.964224  141171 ssh_runner.go:195] Run: systemctl --version
	I0729 11:37:49.970143  141171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:49.984007  141171 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:49.984040  141171 api_server.go:166] Checking apiserver status ...
	I0729 11:37:49.984079  141171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:49.997635  141171 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0729 11:37:50.007260  141171 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:50.007313  141171 ssh_runner.go:195] Run: ls
	I0729 11:37:50.012445  141171 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:50.016526  141171 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:50.016551  141171 status.go:422] ha-691698 apiserver status = Running (err=<nil>)
	I0729 11:37:50.016561  141171 status.go:257] ha-691698 status: &{Name:ha-691698 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:50.016576  141171 status.go:255] checking status of ha-691698-m02 ...
	I0729 11:37:50.016872  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:50.016894  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:50.032208  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0729 11:37:50.032737  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:50.033254  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:50.033295  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:50.033674  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:50.033871  141171 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:37:50.035410  141171 status.go:330] ha-691698-m02 host status = "Running" (err=<nil>)
	I0729 11:37:50.035427  141171 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:50.035801  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:50.035835  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:50.052363  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0729 11:37:50.052854  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:50.053382  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:50.053412  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:50.053752  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:50.053967  141171 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:37:50.057300  141171 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:50.057835  141171 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:50.057862  141171 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:50.058036  141171 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:50.058446  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:50.058482  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:50.074295  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40509
	I0729 11:37:50.074736  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:50.075323  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:50.075349  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:50.075772  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:50.075998  141171 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:37:50.076200  141171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:50.076218  141171 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:37:50.079134  141171 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:50.079732  141171 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:50.079759  141171 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:50.079937  141171 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:37:50.080093  141171 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:37:50.080258  141171 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:37:50.080416  141171 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	W0729 11:37:53.149252  141171 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.5:22: connect: no route to host
	W0729 11:37:53.149356  141171 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	E0729 11:37:53.149372  141171 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:53.149381  141171 status.go:257] ha-691698-m02 status: &{Name:ha-691698-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:37:53.149397  141171 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:37:53.149408  141171 status.go:255] checking status of ha-691698-m03 ...
	I0729 11:37:53.149741  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:53.149787  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:53.166023  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0729 11:37:53.166527  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:53.166995  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:53.167023  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:53.167325  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:53.167531  141171 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:37:53.169321  141171 status.go:330] ha-691698-m03 host status = "Running" (err=<nil>)
	I0729 11:37:53.169344  141171 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:53.169753  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:53.169810  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:53.185120  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I0729 11:37:53.185653  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:53.186184  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:53.186222  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:53.186571  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:53.186772  141171 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:37:53.189651  141171 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:53.190076  141171 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:53.190103  141171 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:53.190251  141171 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:37:53.190575  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:53.190622  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:53.207249  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44047
	I0729 11:37:53.207713  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:53.208196  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:53.208217  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:53.208529  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:53.208701  141171 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:37:53.208892  141171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:53.208916  141171 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:37:53.211940  141171 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:53.212542  141171 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:37:53.212589  141171 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:37:53.212772  141171 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:37:53.212949  141171 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:37:53.213131  141171 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:37:53.213256  141171 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:37:53.296748  141171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:53.313604  141171 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:53.313638  141171 api_server.go:166] Checking apiserver status ...
	I0729 11:37:53.313679  141171 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:53.329795  141171 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0729 11:37:53.340289  141171 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:53.340356  141171 ssh_runner.go:195] Run: ls
	I0729 11:37:53.345323  141171 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:53.351512  141171 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:53.351542  141171 status.go:422] ha-691698-m03 apiserver status = Running (err=<nil>)
	I0729 11:37:53.351554  141171 status.go:257] ha-691698-m03 status: &{Name:ha-691698-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:53.351575  141171 status.go:255] checking status of ha-691698-m04 ...
	I0729 11:37:53.351873  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:53.351922  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:53.367702  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33389
	I0729 11:37:53.368210  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:53.368743  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:53.368773  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:53.369117  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:53.369315  141171 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:37:53.370963  141171 status.go:330] ha-691698-m04 host status = "Running" (err=<nil>)
	I0729 11:37:53.370982  141171 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:53.371338  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:53.371365  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:53.386558  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38165
	I0729 11:37:53.387029  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:53.387549  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:53.387601  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:53.387890  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:53.388056  141171 main.go:141] libmachine: (ha-691698-m04) Calling .GetIP
	I0729 11:37:53.390839  141171 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:53.391255  141171 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:53.391280  141171 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:53.391401  141171 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:37:53.391846  141171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:53.391896  141171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:53.407671  141171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44611
	I0729 11:37:53.408147  141171 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:53.408590  141171 main.go:141] libmachine: Using API Version  1
	I0729 11:37:53.408614  141171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:53.408957  141171 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:53.409237  141171 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:37:53.409454  141171 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:53.409482  141171 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:37:53.412714  141171 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:53.413164  141171 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:37:53.413198  141171 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:37:53.413340  141171 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:37:53.413525  141171 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:37:53.413689  141171 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:37:53.413858  141171 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	I0729 11:37:53.495744  141171 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:53.509565  141171 status.go:257] ha-691698-m04 status: &{Name:ha-691698-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
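ha_test.go:428 keeps re-running the same status command and keeps getting exit status 3 as long as ha-691698-m02 reports host: Error. As a rough, hedged sketch of that retry behaviour (not the test's actual code; the binary path and profile name are simply copied from the command lines above), a polling loop of this shape would wait for the node to recover:

// Illustrative sketch only: re-run "minikube status" until it exits 0
// (all nodes healthy) or a deadline passes. A non-zero exit, such as the
// exit status 3 seen above, is reported through err.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-691698",
			"status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Println("all nodes healthy:\n" + string(out))
			return
		}
		fmt.Println("status still unhealthy, retrying:", err)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("gave up waiting for ha-691698-m02 to recover")
}
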
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr: exit status 3 (3.757415497s)

                                                
                                                
-- stdout --
	ha-691698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-691698-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:37:57.707536  141288 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:37:57.707643  141288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:57.707651  141288 out.go:304] Setting ErrFile to fd 2...
	I0729 11:37:57.707655  141288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:37:57.707833  141288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:37:57.708041  141288 out.go:298] Setting JSON to false
	I0729 11:37:57.708074  141288 mustload.go:65] Loading cluster: ha-691698
	I0729 11:37:57.708189  141288 notify.go:220] Checking for updates...
	I0729 11:37:57.708465  141288 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:37:57.708483  141288 status.go:255] checking status of ha-691698 ...
	I0729 11:37:57.708937  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:57.709021  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:57.724413  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0729 11:37:57.724916  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:57.725500  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:37:57.725525  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:57.725970  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:57.726205  141288 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:37:57.728060  141288 status.go:330] ha-691698 host status = "Running" (err=<nil>)
	I0729 11:37:57.728079  141288 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:57.728386  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:57.728443  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:57.743632  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46005
	I0729 11:37:57.744172  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:57.744732  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:37:57.744751  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:57.745163  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:57.745362  141288 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:37:57.748312  141288 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:57.748749  141288 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:57.748769  141288 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:57.748957  141288 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:37:57.749323  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:57.749370  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:57.766198  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I0729 11:37:57.766708  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:57.767225  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:37:57.767255  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:57.767672  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:57.767876  141288 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:37:57.768149  141288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:57.768184  141288 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:37:57.771105  141288 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:57.771561  141288 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:37:57.771590  141288 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:37:57.771728  141288 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:37:57.771945  141288 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:37:57.772107  141288 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:37:57.772251  141288 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:37:57.856634  141288 ssh_runner.go:195] Run: systemctl --version
	I0729 11:37:57.862761  141288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:37:57.877162  141288 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:37:57.877197  141288 api_server.go:166] Checking apiserver status ...
	I0729 11:37:57.877239  141288 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:37:57.890417  141288 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0729 11:37:57.899376  141288 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:37:57.899438  141288 ssh_runner.go:195] Run: ls
	I0729 11:37:57.903902  141288 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:37:57.909969  141288 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:37:57.909996  141288 status.go:422] ha-691698 apiserver status = Running (err=<nil>)
	I0729 11:37:57.910006  141288 status.go:257] ha-691698 status: &{Name:ha-691698 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:37:57.910030  141288 status.go:255] checking status of ha-691698-m02 ...
	I0729 11:37:57.910395  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:57.910430  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:57.927493  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34771
	I0729 11:37:57.927974  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:57.928475  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:37:57.928495  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:57.928825  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:57.929077  141288 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:37:57.930789  141288 status.go:330] ha-691698-m02 host status = "Running" (err=<nil>)
	I0729 11:37:57.930807  141288 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:57.931106  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:57.931135  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:57.946979  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0729 11:37:57.947461  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:57.947904  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:37:57.947929  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:57.948234  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:57.948412  141288 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:37:57.950993  141288 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:57.951416  141288 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:57.951450  141288 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:57.951528  141288 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:37:57.951959  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:37:57.952004  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:37:57.967210  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I0729 11:37:57.967677  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:37:57.968146  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:37:57.968171  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:37:57.968503  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:37:57.968717  141288 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:37:57.968919  141288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:37:57.968941  141288 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:37:57.971862  141288 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:57.972292  141288 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:37:57.972324  141288 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:37:57.972468  141288 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:37:57.972673  141288 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:37:57.972836  141288 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:37:57.973011  141288 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	W0729 11:38:01.053257  141288 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.5:22: connect: no route to host
	W0729 11:38:01.053367  141288 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	E0729 11:38:01.053390  141288 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:38:01.053402  141288 status.go:257] ha-691698-m02 status: &{Name:ha-691698-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:38:01.053423  141288 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.5:22: connect: no route to host
	I0729 11:38:01.053430  141288 status.go:255] checking status of ha-691698-m03 ...
	I0729 11:38:01.053752  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:01.053820  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:01.069686  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37547
	I0729 11:38:01.070161  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:01.070648  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:38:01.070679  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:01.071058  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:01.071259  141288 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:38:01.072999  141288 status.go:330] ha-691698-m03 host status = "Running" (err=<nil>)
	I0729 11:38:01.073019  141288 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:38:01.073387  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:01.073453  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:01.090823  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0729 11:38:01.091375  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:01.091923  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:38:01.091951  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:01.092281  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:01.092508  141288 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:38:01.095446  141288 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:01.095886  141288 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:38:01.095915  141288 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:01.096021  141288 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:38:01.096318  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:01.096359  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:01.113846  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36035
	I0729 11:38:01.114301  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:01.114782  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:38:01.114809  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:01.115157  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:01.115409  141288 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:38:01.115648  141288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:38:01.115680  141288 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:38:01.118938  141288 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:01.119355  141288 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:38:01.119373  141288 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:01.119525  141288 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:38:01.119711  141288 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:38:01.119875  141288 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:38:01.119986  141288 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:38:01.203959  141288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:38:01.221345  141288 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:38:01.221383  141288 api_server.go:166] Checking apiserver status ...
	I0729 11:38:01.221425  141288 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:38:01.236461  141288 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0729 11:38:01.250209  141288 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:38:01.250267  141288 ssh_runner.go:195] Run: ls
	I0729 11:38:01.257126  141288 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:38:01.263622  141288 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:38:01.263652  141288 status.go:422] ha-691698-m03 apiserver status = Running (err=<nil>)
	I0729 11:38:01.263676  141288 status.go:257] ha-691698-m03 status: &{Name:ha-691698-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:38:01.263697  141288 status.go:255] checking status of ha-691698-m04 ...
	I0729 11:38:01.264042  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:01.264071  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:01.280234  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I0729 11:38:01.280679  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:01.281169  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:38:01.281198  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:01.281543  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:01.281750  141288 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:38:01.283359  141288 status.go:330] ha-691698-m04 host status = "Running" (err=<nil>)
	I0729 11:38:01.283376  141288 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:38:01.283683  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:01.283711  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:01.298838  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35065
	I0729 11:38:01.299330  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:01.299792  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:38:01.299817  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:01.300147  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:01.300311  141288 main.go:141] libmachine: (ha-691698-m04) Calling .GetIP
	I0729 11:38:01.303319  141288 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:01.303788  141288 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:38:01.303832  141288 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:01.304008  141288 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:38:01.304308  141288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:01.304347  141288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:01.321474  141288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0729 11:38:01.321909  141288 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:01.322397  141288 main.go:141] libmachine: Using API Version  1
	I0729 11:38:01.322428  141288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:01.322794  141288 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:01.323105  141288 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:38:01.323377  141288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:38:01.323400  141288 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:38:01.326335  141288 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:01.326858  141288 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:38:01.326883  141288 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:01.327058  141288 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:38:01.327256  141288 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:38:01.327407  141288 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:38:01.327585  141288 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	I0729 11:38:01.403744  141288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:38:01.418130  141288 status.go:257] ha-691698-m04 status: &{Name:ha-691698-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
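The stderr block above shows the probe sequence `minikube status` runs against each control-plane node: launch the kvm2 driver plugin, SSH into the VM, locate the kube-apiserver process with `pgrep`, tolerate the missing freezer cgroup (expected on cgroup v2 hosts), and finally GET `/healthz` on the cluster VIP. The following is a minimal illustrative sketch of that last step only; it is not minikube's actual implementation, and the skipped TLS verification is an assumption made purely to keep the example self-contained.

	// healthz_probe.go -- hedged sketch of the /healthz check seen in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func checkHealthz(endpoint string) error {
		// The test VM serves a self-signed CA, so verification is skipped
		// here for illustration only.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		fmt.Printf("%s/healthz returned 200: %s\n", endpoint, body) // e.g. "ok"
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.39.254:8443"); err != nil {
			fmt.Println("apiserver status = Error:", err)
		}
	}

A 200 response is what lets the log report "apiserver status = Running" for ha-691698 and ha-691698-m03 even though the freezer-cgroup lookup failed.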
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr: exit status 7 (613.525655ms)

                                                
                                                
-- stdout --
	ha-691698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-691698-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:38:06.725792  141408 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:38:06.725920  141408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:38:06.725928  141408 out.go:304] Setting ErrFile to fd 2...
	I0729 11:38:06.725932  141408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:38:06.726102  141408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:38:06.726290  141408 out.go:298] Setting JSON to false
	I0729 11:38:06.726323  141408 mustload.go:65] Loading cluster: ha-691698
	I0729 11:38:06.726372  141408 notify.go:220] Checking for updates...
	I0729 11:38:06.726701  141408 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:38:06.726719  141408 status.go:255] checking status of ha-691698 ...
	I0729 11:38:06.727163  141408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:06.727232  141408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:06.743193  141408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I0729 11:38:06.743682  141408 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:06.744346  141408 main.go:141] libmachine: Using API Version  1
	I0729 11:38:06.744369  141408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:06.744878  141408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:06.745153  141408 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:38:06.747108  141408 status.go:330] ha-691698 host status = "Running" (err=<nil>)
	I0729 11:38:06.747128  141408 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:38:06.747585  141408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:06.747643  141408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:06.763763  141408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0729 11:38:06.764176  141408 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:06.764677  141408 main.go:141] libmachine: Using API Version  1
	I0729 11:38:06.764728  141408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:06.765066  141408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:06.765267  141408 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:38:06.768106  141408 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:38:06.768605  141408 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:38:06.768631  141408 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:38:06.768853  141408 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:38:06.769257  141408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:06.769315  141408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:06.785268  141408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44997
	I0729 11:38:06.785714  141408 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:06.786316  141408 main.go:141] libmachine: Using API Version  1
	I0729 11:38:06.786339  141408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:06.786657  141408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:06.786847  141408 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:38:06.787043  141408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:38:06.787081  141408 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:38:06.790392  141408 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:38:06.790878  141408 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:38:06.790907  141408 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:38:06.791083  141408 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:38:06.791272  141408 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:38:06.791461  141408 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:38:06.791611  141408 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:38:06.872367  141408 ssh_runner.go:195] Run: systemctl --version
	I0729 11:38:06.878606  141408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:38:06.892389  141408 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:38:06.892423  141408 api_server.go:166] Checking apiserver status ...
	I0729 11:38:06.892462  141408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:38:06.905771  141408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0729 11:38:06.914898  141408 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:38:06.914982  141408 ssh_runner.go:195] Run: ls
	I0729 11:38:06.919259  141408 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:38:06.923503  141408 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:38:06.923532  141408 status.go:422] ha-691698 apiserver status = Running (err=<nil>)
	I0729 11:38:06.923545  141408 status.go:257] ha-691698 status: &{Name:ha-691698 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:38:06.923566  141408 status.go:255] checking status of ha-691698-m02 ...
	I0729 11:38:06.923872  141408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:06.923900  141408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:06.938967  141408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37429
	I0729 11:38:06.939436  141408 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:06.939952  141408 main.go:141] libmachine: Using API Version  1
	I0729 11:38:06.939973  141408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:06.940336  141408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:06.940534  141408 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:38:06.942236  141408 status.go:330] ha-691698-m02 host status = "Stopped" (err=<nil>)
	I0729 11:38:06.942251  141408 status.go:343] host is not running, skipping remaining checks
	I0729 11:38:06.942260  141408 status.go:257] ha-691698-m02 status: &{Name:ha-691698-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:38:06.942281  141408 status.go:255] checking status of ha-691698-m03 ...
	I0729 11:38:06.942600  141408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:06.942630  141408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:06.958280  141408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34215
	I0729 11:38:06.958770  141408 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:06.959257  141408 main.go:141] libmachine: Using API Version  1
	I0729 11:38:06.959282  141408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:06.959673  141408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:06.959877  141408 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:38:06.961523  141408 status.go:330] ha-691698-m03 host status = "Running" (err=<nil>)
	I0729 11:38:06.961544  141408 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:38:06.961936  141408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:06.961981  141408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:06.977471  141408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37597
	I0729 11:38:06.977931  141408 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:06.978529  141408 main.go:141] libmachine: Using API Version  1
	I0729 11:38:06.978556  141408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:06.978926  141408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:06.979096  141408 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:38:06.982022  141408 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:06.982450  141408 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:38:06.982486  141408 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:06.982652  141408 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:38:06.982969  141408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:06.983018  141408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:06.998117  141408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0729 11:38:06.998622  141408 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:06.999114  141408 main.go:141] libmachine: Using API Version  1
	I0729 11:38:06.999140  141408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:06.999473  141408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:06.999747  141408 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:38:06.999932  141408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:38:06.999956  141408 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:38:07.002692  141408 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:07.003112  141408 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:38:07.003132  141408 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:07.003322  141408 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:38:07.003506  141408 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:38:07.003695  141408 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:38:07.003821  141408 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:38:07.084510  141408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:38:07.099550  141408 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:38:07.099578  141408 api_server.go:166] Checking apiserver status ...
	I0729 11:38:07.099615  141408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:38:07.113892  141408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0729 11:38:07.123796  141408 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:38:07.123862  141408 ssh_runner.go:195] Run: ls
	I0729 11:38:07.128676  141408 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:38:07.133122  141408 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:38:07.133149  141408 status.go:422] ha-691698-m03 apiserver status = Running (err=<nil>)
	I0729 11:38:07.133158  141408 status.go:257] ha-691698-m03 status: &{Name:ha-691698-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:38:07.133171  141408 status.go:255] checking status of ha-691698-m04 ...
	I0729 11:38:07.133510  141408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:07.133535  141408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:07.149260  141408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0729 11:38:07.149753  141408 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:07.150297  141408 main.go:141] libmachine: Using API Version  1
	I0729 11:38:07.150312  141408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:07.150668  141408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:07.150862  141408 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:38:07.152342  141408 status.go:330] ha-691698-m04 host status = "Running" (err=<nil>)
	I0729 11:38:07.152361  141408 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:38:07.152756  141408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:07.152828  141408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:07.169361  141408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42459
	I0729 11:38:07.169838  141408 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:07.170342  141408 main.go:141] libmachine: Using API Version  1
	I0729 11:38:07.170369  141408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:07.170810  141408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:07.171031  141408 main.go:141] libmachine: (ha-691698-m04) Calling .GetIP
	I0729 11:38:07.173863  141408 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:07.174285  141408 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:38:07.174315  141408 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:07.174509  141408 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:38:07.174803  141408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:07.174850  141408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:07.190911  141408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0729 11:38:07.191385  141408 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:07.191878  141408 main.go:141] libmachine: Using API Version  1
	I0729 11:38:07.191903  141408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:07.192227  141408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:07.192457  141408 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:38:07.192662  141408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:38:07.192683  141408 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:38:07.195413  141408 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:07.195776  141408 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:38:07.195799  141408 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:07.195954  141408 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:38:07.196188  141408 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:38:07.196360  141408 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:38:07.196558  141408 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	I0729 11:38:07.279801  141408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:38:07.293105  141408 status.go:257] ha-691698-m04 status: &{Name:ha-691698-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
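Before the apiserver probe, each pass in the log also runs two host-level checks over SSH: `/var` usage via `df -h /var | awk 'NR==2{print $5}'` and the kubelet unit via `sudo systemctl is-active --quiet service kubelet`. The sketch below reproduces those two commands locally for illustration; the helper names are assumptions, and the real tool dispatches them through its SSH runner rather than `os/exec`.

	// host_checks.go -- hedged sketch of the per-node host checks in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// diskUsage mirrors the pipeline in the log: second line of `df -h /var`,
	// fifth column (the percentage used).
	func diskUsage() (string, error) {
		out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
		return strings.TrimSpace(string(out)), err
	}

	// kubeletActive relies on `systemctl is-active --quiet` exiting 0 only
	// when the unit is active.
	func kubeletActive() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
	}

	func main() {
		usage, err := diskUsage()
		if err != nil {
			fmt.Println("df failed:", err)
			return
		}
		fmt.Printf("/var usage: %s, kubelet running: %v\n", usage, kubeletActive())
	}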
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr: exit status 7 (620.157627ms)

                                                
                                                
-- stdout --
	ha-691698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-691698-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:38:19.618110  141530 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:38:19.618227  141530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:38:19.618234  141530 out.go:304] Setting ErrFile to fd 2...
	I0729 11:38:19.618238  141530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:38:19.618418  141530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:38:19.618586  141530 out.go:298] Setting JSON to false
	I0729 11:38:19.618614  141530 mustload.go:65] Loading cluster: ha-691698
	I0729 11:38:19.618653  141530 notify.go:220] Checking for updates...
	I0729 11:38:19.619090  141530 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:38:19.619111  141530 status.go:255] checking status of ha-691698 ...
	I0729 11:38:19.619539  141530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:19.619614  141530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:19.635906  141530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34707
	I0729 11:38:19.636406  141530 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:19.637082  141530 main.go:141] libmachine: Using API Version  1
	I0729 11:38:19.637113  141530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:19.637553  141530 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:19.637816  141530 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:38:19.639451  141530 status.go:330] ha-691698 host status = "Running" (err=<nil>)
	I0729 11:38:19.639480  141530 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:38:19.639764  141530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:19.639800  141530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:19.656294  141530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0729 11:38:19.656785  141530 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:19.657388  141530 main.go:141] libmachine: Using API Version  1
	I0729 11:38:19.657418  141530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:19.657789  141530 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:19.658002  141530 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:38:19.661112  141530 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:38:19.661653  141530 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:38:19.661687  141530 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:38:19.661786  141530 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:38:19.662201  141530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:19.662253  141530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:19.678578  141530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0729 11:38:19.679036  141530 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:19.679683  141530 main.go:141] libmachine: Using API Version  1
	I0729 11:38:19.679710  141530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:19.680032  141530 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:19.680253  141530 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:38:19.680481  141530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:38:19.680525  141530 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:38:19.683402  141530 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:38:19.683866  141530 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:38:19.683943  141530 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:38:19.684237  141530 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:38:19.684503  141530 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:38:19.684678  141530 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:38:19.684834  141530 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:38:19.768042  141530 ssh_runner.go:195] Run: systemctl --version
	I0729 11:38:19.774230  141530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:38:19.790629  141530 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:38:19.790666  141530 api_server.go:166] Checking apiserver status ...
	I0729 11:38:19.790726  141530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:38:19.808016  141530 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W0729 11:38:19.818069  141530 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:38:19.818140  141530 ssh_runner.go:195] Run: ls
	I0729 11:38:19.822533  141530 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:38:19.829654  141530 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:38:19.829689  141530 status.go:422] ha-691698 apiserver status = Running (err=<nil>)
	I0729 11:38:19.829704  141530 status.go:257] ha-691698 status: &{Name:ha-691698 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:38:19.829727  141530 status.go:255] checking status of ha-691698-m02 ...
	I0729 11:38:19.830113  141530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:19.830169  141530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:19.845130  141530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46017
	I0729 11:38:19.845617  141530 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:19.846137  141530 main.go:141] libmachine: Using API Version  1
	I0729 11:38:19.846157  141530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:19.846463  141530 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:19.846655  141530 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:38:19.848089  141530 status.go:330] ha-691698-m02 host status = "Stopped" (err=<nil>)
	I0729 11:38:19.848105  141530 status.go:343] host is not running, skipping remaining checks
	I0729 11:38:19.848113  141530 status.go:257] ha-691698-m02 status: &{Name:ha-691698-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:38:19.848134  141530 status.go:255] checking status of ha-691698-m03 ...
	I0729 11:38:19.848460  141530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:19.848490  141530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:19.863592  141530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0729 11:38:19.864050  141530 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:19.864547  141530 main.go:141] libmachine: Using API Version  1
	I0729 11:38:19.864567  141530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:19.864886  141530 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:19.865130  141530 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:38:19.866721  141530 status.go:330] ha-691698-m03 host status = "Running" (err=<nil>)
	I0729 11:38:19.866743  141530 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:38:19.867028  141530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:19.867062  141530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:19.882541  141530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44353
	I0729 11:38:19.883020  141530 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:19.883451  141530 main.go:141] libmachine: Using API Version  1
	I0729 11:38:19.883475  141530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:19.883798  141530 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:19.884001  141530 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:38:19.886645  141530 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:19.887114  141530 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:38:19.887137  141530 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:19.887345  141530 host.go:66] Checking if "ha-691698-m03" exists ...
	I0729 11:38:19.887664  141530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:19.887707  141530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:19.903574  141530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33461
	I0729 11:38:19.903992  141530 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:19.904443  141530 main.go:141] libmachine: Using API Version  1
	I0729 11:38:19.904472  141530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:19.904776  141530 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:19.904997  141530 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:38:19.905177  141530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:38:19.905198  141530 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:38:19.907888  141530 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:19.908329  141530 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:38:19.908353  141530 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:19.908521  141530 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:38:19.908714  141530 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:38:19.908887  141530 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:38:19.909043  141530 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:38:19.988090  141530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:38:20.003241  141530 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:38:20.003273  141530 api_server.go:166] Checking apiserver status ...
	I0729 11:38:20.003303  141530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:38:20.015981  141530 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup
	W0729 11:38:20.025362  141530 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1511/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:38:20.025457  141530 ssh_runner.go:195] Run: ls
	I0729 11:38:20.029963  141530 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:38:20.036308  141530 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:38:20.036351  141530 status.go:422] ha-691698-m03 apiserver status = Running (err=<nil>)
	I0729 11:38:20.036365  141530 status.go:257] ha-691698-m03 status: &{Name:ha-691698-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:38:20.036391  141530 status.go:255] checking status of ha-691698-m04 ...
	I0729 11:38:20.036824  141530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:20.036871  141530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:20.052718  141530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0729 11:38:20.053207  141530 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:20.053671  141530 main.go:141] libmachine: Using API Version  1
	I0729 11:38:20.053697  141530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:20.054060  141530 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:20.054257  141530 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:38:20.055715  141530 status.go:330] ha-691698-m04 host status = "Running" (err=<nil>)
	I0729 11:38:20.055736  141530 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:38:20.056181  141530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:20.056235  141530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:20.071277  141530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44635
	I0729 11:38:20.071710  141530 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:20.072190  141530 main.go:141] libmachine: Using API Version  1
	I0729 11:38:20.072213  141530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:20.072521  141530 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:20.072701  141530 main.go:141] libmachine: (ha-691698-m04) Calling .GetIP
	I0729 11:38:20.075933  141530 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:20.076415  141530 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:38:20.076446  141530 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:20.076599  141530 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:38:20.076905  141530 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:20.076936  141530 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:20.092866  141530 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46247
	I0729 11:38:20.093310  141530 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:20.093824  141530 main.go:141] libmachine: Using API Version  1
	I0729 11:38:20.093846  141530 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:20.094202  141530 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:20.094423  141530 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:38:20.094647  141530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:38:20.094667  141530 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:38:20.097828  141530 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:20.098262  141530 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:38:20.098285  141530 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:20.098495  141530 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:38:20.098650  141530 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:38:20.098807  141530 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:38:20.098925  141530 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	I0729 11:38:20.175854  141530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:38:20.189181  141530 status.go:257] ha-691698-m04 status: &{Name:ha-691698-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
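Each pass ends by printing a per-node record with the fields shown in the status.go lines above (Host, Kubelet, APIServer, Kubeconfig, Worker). The sketch below restates this run's results in that shape to make the failure condition explicit: with ha-691698-m02 still Stopped, the overall status is degraded and the command exits non-zero (7 in this run). The struct and the degraded-check are illustrative assumptions, not minikube's internal types or its exact exit-code mapping.

	// status_snapshot.go -- hedged sketch of this run's per-node status.
	package main

	import "fmt"

	type NodeStatus struct {
		Name       string
		Host       string // "Running" or "Stopped"
		Kubelet    string
		APIServer  string // "Irrelevant" for worker nodes
		Kubeconfig string
		Worker     bool
	}

	// anyStopped reports whether any node's host or kubelet is down.
	func anyStopped(nodes []NodeStatus) bool {
		for _, n := range nodes {
			if n.Host == "Stopped" || n.Kubelet == "Stopped" {
				return true
			}
		}
		return false
	}

	func main() {
		nodes := []NodeStatus{
			{Name: "ha-691698", Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"},
			{Name: "ha-691698-m02", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"},
			{Name: "ha-691698-m03", Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"},
			{Name: "ha-691698-m04", Host: "Running", Kubelet: "Running", APIServer: "Irrelevant", Kubeconfig: "Irrelevant", Worker: true},
		}
		fmt.Println("degraded:", anyStopped(nodes)) // true -> non-zero exit in this test
	}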
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-691698 -n ha-691698
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-691698 logs -n 25: (1.356072676s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698:/home/docker/cp-test_ha-691698-m03_ha-691698.txt                       |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698 sudo cat                                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m03_ha-691698.txt                                 |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m02:/home/docker/cp-test_ha-691698-m03_ha-691698-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m02 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m03_ha-691698-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04:/home/docker/cp-test_ha-691698-m03_ha-691698-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m04 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m03_ha-691698-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp testdata/cp-test.txt                                                | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1858176500/001/cp-test_ha-691698-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698:/home/docker/cp-test_ha-691698-m04_ha-691698.txt                       |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698 sudo cat                                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698.txt                                 |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m02:/home/docker/cp-test_ha-691698-m04_ha-691698-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m02 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03:/home/docker/cp-test_ha-691698-m04_ha-691698-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m03 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-691698 node stop m02 -v=7                                                     | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-691698 node start m02 -v=7                                                    | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:30:19
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:30:19.109800  135944 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:30:19.109894  135944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:30:19.109901  135944 out.go:304] Setting ErrFile to fd 2...
	I0729 11:30:19.109905  135944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:30:19.110113  135944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:30:19.110673  135944 out.go:298] Setting JSON to false
	I0729 11:30:19.111583  135944 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4370,"bootTime":1722248249,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:30:19.111641  135944 start.go:139] virtualization: kvm guest
	I0729 11:30:19.113602  135944 out.go:177] * [ha-691698] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:30:19.114844  135944 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 11:30:19.114889  135944 notify.go:220] Checking for updates...
	I0729 11:30:19.117179  135944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:30:19.118330  135944 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:30:19.119421  135944 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:30:19.120555  135944 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:30:19.121649  135944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:30:19.122987  135944 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:30:19.159520  135944 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 11:30:19.160871  135944 start.go:297] selected driver: kvm2
	I0729 11:30:19.161000  135944 start.go:901] validating driver "kvm2" against <nil>
	I0729 11:30:19.161040  135944 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:30:19.162553  135944 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:30:19.162633  135944 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:30:19.178223  135944 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:30:19.178282  135944 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:30:19.178474  135944 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:30:19.178517  135944 cni.go:84] Creating CNI manager for ""
	I0729 11:30:19.178537  135944 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 11:30:19.178549  135944 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 11:30:19.178615  135944 start.go:340] cluster config:
	{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:30:19.178704  135944 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:30:19.180298  135944 out.go:177] * Starting "ha-691698" primary control-plane node in "ha-691698" cluster
	I0729 11:30:19.181407  135944 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:30:19.181437  135944 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 11:30:19.181444  135944 cache.go:56] Caching tarball of preloaded images
	I0729 11:30:19.181516  135944 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:30:19.181530  135944 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:30:19.181817  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:30:19.181839  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json: {Name:mke678dd073965d3ae53a18897ada1c5c7139621 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:19.181964  135944 start.go:360] acquireMachinesLock for ha-691698: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:30:19.181991  135944 start.go:364] duration metric: took 15.311µs to acquireMachinesLock for "ha-691698"
	I0729 11:30:19.182006  135944 start.go:93] Provisioning new machine with config: &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:30:19.182060  135944 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 11:30:19.183523  135944 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:30:19.183631  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:30:19.183663  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:30:19.199214  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45541
	I0729 11:30:19.199720  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:30:19.200218  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:30:19.200240  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:30:19.200647  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:30:19.200816  135944 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:30:19.200988  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:19.201124  135944 start.go:159] libmachine.API.Create for "ha-691698" (driver="kvm2")
	I0729 11:30:19.201153  135944 client.go:168] LocalClient.Create starting
	I0729 11:30:19.201190  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem
	I0729 11:30:19.201222  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:30:19.201235  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:30:19.201296  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem
	I0729 11:30:19.201316  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:30:19.201328  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:30:19.201343  135944 main.go:141] libmachine: Running pre-create checks...
	I0729 11:30:19.201352  135944 main.go:141] libmachine: (ha-691698) Calling .PreCreateCheck
	I0729 11:30:19.201686  135944 main.go:141] libmachine: (ha-691698) Calling .GetConfigRaw
	I0729 11:30:19.202018  135944 main.go:141] libmachine: Creating machine...
	I0729 11:30:19.202033  135944 main.go:141] libmachine: (ha-691698) Calling .Create
	I0729 11:30:19.202161  135944 main.go:141] libmachine: (ha-691698) Creating KVM machine...
	I0729 11:30:19.203682  135944 main.go:141] libmachine: (ha-691698) DBG | found existing default KVM network
	I0729 11:30:19.204680  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:19.204521  135967 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015470}
	I0729 11:30:19.204747  135944 main.go:141] libmachine: (ha-691698) DBG | created network xml: 
	I0729 11:30:19.204766  135944 main.go:141] libmachine: (ha-691698) DBG | <network>
	I0729 11:30:19.204776  135944 main.go:141] libmachine: (ha-691698) DBG |   <name>mk-ha-691698</name>
	I0729 11:30:19.204786  135944 main.go:141] libmachine: (ha-691698) DBG |   <dns enable='no'/>
	I0729 11:30:19.204797  135944 main.go:141] libmachine: (ha-691698) DBG |   
	I0729 11:30:19.204808  135944 main.go:141] libmachine: (ha-691698) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 11:30:19.204817  135944 main.go:141] libmachine: (ha-691698) DBG |     <dhcp>
	I0729 11:30:19.204828  135944 main.go:141] libmachine: (ha-691698) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 11:30:19.204861  135944 main.go:141] libmachine: (ha-691698) DBG |     </dhcp>
	I0729 11:30:19.204885  135944 main.go:141] libmachine: (ha-691698) DBG |   </ip>
	I0729 11:30:19.204897  135944 main.go:141] libmachine: (ha-691698) DBG |   
	I0729 11:30:19.204908  135944 main.go:141] libmachine: (ha-691698) DBG | </network>
	I0729 11:30:19.204934  135944 main.go:141] libmachine: (ha-691698) DBG | 
	I0729 11:30:19.209956  135944 main.go:141] libmachine: (ha-691698) DBG | trying to create private KVM network mk-ha-691698 192.168.39.0/24...
	I0729 11:30:19.275395  135944 main.go:141] libmachine: (ha-691698) DBG | private KVM network mk-ha-691698 192.168.39.0/24 created
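	[Note] The "created network xml" block above is a small templated document. A minimal sketch, assuming a text/template approach and made-up struct/field names (this is not minikube's kvm2 driver code), of how such a network definition could be rendered in Go:

	    package main

	    import (
	    	"os"
	    	"text/template"
	    )

	    // netConfig holds the handful of values the logged XML varies on.
	    // Field names are illustrative only.
	    type netConfig struct {
	    	Name      string
	    	Gateway   string
	    	Netmask   string
	    	DHCPStart string
	    	DHCPEnd   string
	    }

	    // networkTmpl mirrors the <network> XML printed in the log above.
	    var networkTmpl = template.Must(template.New("net").Parse(`<network>
	      <name>{{.Name}}</name>
	      <dns enable='no'/>
	      <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	        <dhcp>
	          <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
	        </dhcp>
	      </ip>
	    </network>
	    `))

	    func main() {
	    	cfg := netConfig{
	    		Name:      "mk-ha-691698",
	    		Gateway:   "192.168.39.1",
	    		Netmask:   "255.255.255.0",
	    		DHCPStart: "192.168.39.2",
	    		DHCPEnd:   "192.168.39.253",
	    	}
	    	// Writes the rendered XML to stdout; a real driver would hand it to libvirt.
	    	if err := networkTmpl.Execute(os.Stdout, cfg); err != nil {
	    		panic(err)
	    	}
	    }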
	I0729 11:30:19.275428  135944 main.go:141] libmachine: (ha-691698) Setting up store path in /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698 ...
	I0729 11:30:19.275444  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:19.275349  135967 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:30:19.275461  135944 main.go:141] libmachine: (ha-691698) Building disk image from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 11:30:19.275535  135944 main.go:141] libmachine: (ha-691698) Downloading /home/jenkins/minikube-integration/19336-113730/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 11:30:19.538171  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:19.538045  135967 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa...
	I0729 11:30:19.642797  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:19.642625  135967 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/ha-691698.rawdisk...
	I0729 11:30:19.642870  135944 main.go:141] libmachine: (ha-691698) DBG | Writing magic tar header
	I0729 11:30:19.642889  135944 main.go:141] libmachine: (ha-691698) DBG | Writing SSH key tar header
	I0729 11:30:19.642902  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:19.642784  135967 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698 ...
	I0729 11:30:19.642916  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698
	I0729 11:30:19.642979  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698 (perms=drwx------)
	I0729 11:30:19.643005  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines (perms=drwxr-xr-x)
	I0729 11:30:19.643012  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines
	I0729 11:30:19.643035  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:30:19.643044  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730
	I0729 11:30:19.643051  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube (perms=drwxr-xr-x)
	I0729 11:30:19.643058  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730 (perms=drwxrwxr-x)
	I0729 11:30:19.643067  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 11:30:19.643074  135944 main.go:141] libmachine: (ha-691698) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 11:30:19.643078  135944 main.go:141] libmachine: (ha-691698) Creating domain...
	I0729 11:30:19.643087  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 11:30:19.643092  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home/jenkins
	I0729 11:30:19.643119  135944 main.go:141] libmachine: (ha-691698) DBG | Checking permissions on dir: /home
	I0729 11:30:19.643143  135944 main.go:141] libmachine: (ha-691698) DBG | Skipping /home - not owner
	I0729 11:30:19.644234  135944 main.go:141] libmachine: (ha-691698) define libvirt domain using xml: 
	I0729 11:30:19.644258  135944 main.go:141] libmachine: (ha-691698) <domain type='kvm'>
	I0729 11:30:19.644265  135944 main.go:141] libmachine: (ha-691698)   <name>ha-691698</name>
	I0729 11:30:19.644270  135944 main.go:141] libmachine: (ha-691698)   <memory unit='MiB'>2200</memory>
	I0729 11:30:19.644275  135944 main.go:141] libmachine: (ha-691698)   <vcpu>2</vcpu>
	I0729 11:30:19.644279  135944 main.go:141] libmachine: (ha-691698)   <features>
	I0729 11:30:19.644284  135944 main.go:141] libmachine: (ha-691698)     <acpi/>
	I0729 11:30:19.644288  135944 main.go:141] libmachine: (ha-691698)     <apic/>
	I0729 11:30:19.644292  135944 main.go:141] libmachine: (ha-691698)     <pae/>
	I0729 11:30:19.644299  135944 main.go:141] libmachine: (ha-691698)     
	I0729 11:30:19.644304  135944 main.go:141] libmachine: (ha-691698)   </features>
	I0729 11:30:19.644309  135944 main.go:141] libmachine: (ha-691698)   <cpu mode='host-passthrough'>
	I0729 11:30:19.644313  135944 main.go:141] libmachine: (ha-691698)   
	I0729 11:30:19.644321  135944 main.go:141] libmachine: (ha-691698)   </cpu>
	I0729 11:30:19.644326  135944 main.go:141] libmachine: (ha-691698)   <os>
	I0729 11:30:19.644334  135944 main.go:141] libmachine: (ha-691698)     <type>hvm</type>
	I0729 11:30:19.644362  135944 main.go:141] libmachine: (ha-691698)     <boot dev='cdrom'/>
	I0729 11:30:19.644384  135944 main.go:141] libmachine: (ha-691698)     <boot dev='hd'/>
	I0729 11:30:19.644405  135944 main.go:141] libmachine: (ha-691698)     <bootmenu enable='no'/>
	I0729 11:30:19.644415  135944 main.go:141] libmachine: (ha-691698)   </os>
	I0729 11:30:19.644424  135944 main.go:141] libmachine: (ha-691698)   <devices>
	I0729 11:30:19.644432  135944 main.go:141] libmachine: (ha-691698)     <disk type='file' device='cdrom'>
	I0729 11:30:19.644446  135944 main.go:141] libmachine: (ha-691698)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/boot2docker.iso'/>
	I0729 11:30:19.644488  135944 main.go:141] libmachine: (ha-691698)       <target dev='hdc' bus='scsi'/>
	I0729 11:30:19.644503  135944 main.go:141] libmachine: (ha-691698)       <readonly/>
	I0729 11:30:19.644513  135944 main.go:141] libmachine: (ha-691698)     </disk>
	I0729 11:30:19.644524  135944 main.go:141] libmachine: (ha-691698)     <disk type='file' device='disk'>
	I0729 11:30:19.644537  135944 main.go:141] libmachine: (ha-691698)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 11:30:19.644557  135944 main.go:141] libmachine: (ha-691698)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/ha-691698.rawdisk'/>
	I0729 11:30:19.644572  135944 main.go:141] libmachine: (ha-691698)       <target dev='hda' bus='virtio'/>
	I0729 11:30:19.644580  135944 main.go:141] libmachine: (ha-691698)     </disk>
	I0729 11:30:19.644586  135944 main.go:141] libmachine: (ha-691698)     <interface type='network'>
	I0729 11:30:19.644593  135944 main.go:141] libmachine: (ha-691698)       <source network='mk-ha-691698'/>
	I0729 11:30:19.644598  135944 main.go:141] libmachine: (ha-691698)       <model type='virtio'/>
	I0729 11:30:19.644605  135944 main.go:141] libmachine: (ha-691698)     </interface>
	I0729 11:30:19.644611  135944 main.go:141] libmachine: (ha-691698)     <interface type='network'>
	I0729 11:30:19.644619  135944 main.go:141] libmachine: (ha-691698)       <source network='default'/>
	I0729 11:30:19.644624  135944 main.go:141] libmachine: (ha-691698)       <model type='virtio'/>
	I0729 11:30:19.644631  135944 main.go:141] libmachine: (ha-691698)     </interface>
	I0729 11:30:19.644636  135944 main.go:141] libmachine: (ha-691698)     <serial type='pty'>
	I0729 11:30:19.644643  135944 main.go:141] libmachine: (ha-691698)       <target port='0'/>
	I0729 11:30:19.644659  135944 main.go:141] libmachine: (ha-691698)     </serial>
	I0729 11:30:19.644677  135944 main.go:141] libmachine: (ha-691698)     <console type='pty'>
	I0729 11:30:19.644686  135944 main.go:141] libmachine: (ha-691698)       <target type='serial' port='0'/>
	I0729 11:30:19.644695  135944 main.go:141] libmachine: (ha-691698)     </console>
	I0729 11:30:19.644705  135944 main.go:141] libmachine: (ha-691698)     <rng model='virtio'>
	I0729 11:30:19.644714  135944 main.go:141] libmachine: (ha-691698)       <backend model='random'>/dev/random</backend>
	I0729 11:30:19.644726  135944 main.go:141] libmachine: (ha-691698)     </rng>
	I0729 11:30:19.644732  135944 main.go:141] libmachine: (ha-691698)     
	I0729 11:30:19.644743  135944 main.go:141] libmachine: (ha-691698)     
	I0729 11:30:19.644755  135944 main.go:141] libmachine: (ha-691698)   </devices>
	I0729 11:30:19.644765  135944 main.go:141] libmachine: (ha-691698) </domain>
	I0729 11:30:19.644771  135944 main.go:141] libmachine: (ha-691698) 
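	[Note] The "define libvirt domain using xml" lines above are followed by "Creating domain...". A rough, self-contained sketch of that define-then-start step using the upstream libvirt Go bindings; the module path, file name, and error handling are assumptions for illustration, not minikube's driver code:

	    package main

	    import (
	    	"fmt"
	    	"os"

	    	libvirt "libvirt.org/go/libvirt"
	    )

	    func main() {
	    	// Read a domain definition shaped like the one dumped in the log above.
	    	xml, err := os.ReadFile("ha-691698.xml")
	    	if err != nil {
	    		panic(err)
	    	}

	    	conn, err := libvirt.NewConnect("qemu:///system")
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer conn.Close()

	    	// Define the persistent domain, then start it ("Creating domain..." in the log).
	    	dom, err := conn.DomainDefineXML(string(xml))
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer dom.Free()

	    	if err := dom.Create(); err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("domain defined and started")
	    }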
	I0729 11:30:19.649272  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:4c:d4:11 in network default
	I0729 11:30:19.649774  135944 main.go:141] libmachine: (ha-691698) Ensuring networks are active...
	I0729 11:30:19.649800  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:19.650476  135944 main.go:141] libmachine: (ha-691698) Ensuring network default is active
	I0729 11:30:19.650798  135944 main.go:141] libmachine: (ha-691698) Ensuring network mk-ha-691698 is active
	I0729 11:30:19.651309  135944 main.go:141] libmachine: (ha-691698) Getting domain xml...
	I0729 11:30:19.652086  135944 main.go:141] libmachine: (ha-691698) Creating domain...
	I0729 11:30:20.834386  135944 main.go:141] libmachine: (ha-691698) Waiting to get IP...
	I0729 11:30:20.835226  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:20.835610  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:20.835647  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:20.835592  135967 retry.go:31] will retry after 205.264513ms: waiting for machine to come up
	I0729 11:30:21.042069  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:21.042506  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:21.042531  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:21.042453  135967 retry.go:31] will retry after 253.112411ms: waiting for machine to come up
	I0729 11:30:21.297002  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:21.297371  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:21.297394  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:21.297339  135967 retry.go:31] will retry after 400.644185ms: waiting for machine to come up
	I0729 11:30:21.700028  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:21.700502  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:21.700535  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:21.700475  135967 retry.go:31] will retry after 408.754818ms: waiting for machine to come up
	I0729 11:30:22.111106  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:22.111519  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:22.111543  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:22.111433  135967 retry.go:31] will retry after 617.303625ms: waiting for machine to come up
	I0729 11:30:22.730373  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:22.730885  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:22.730911  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:22.730837  135967 retry.go:31] will retry after 832.743886ms: waiting for machine to come up
	I0729 11:30:23.564805  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:23.565227  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:23.565262  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:23.565165  135967 retry.go:31] will retry after 1.027807046s: waiting for machine to come up
	I0729 11:30:24.594076  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:24.594639  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:24.594681  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:24.594459  135967 retry.go:31] will retry after 1.23332671s: waiting for machine to come up
	I0729 11:30:25.830076  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:25.830500  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:25.830958  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:25.830460  135967 retry.go:31] will retry after 1.283922101s: waiting for machine to come up
	I0729 11:30:27.115966  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:27.116244  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:27.116263  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:27.116221  135967 retry.go:31] will retry after 2.291871554s: waiting for machine to come up
	I0729 11:30:29.410192  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:29.410659  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:29.410693  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:29.410600  135967 retry.go:31] will retry after 1.85080417s: waiting for machine to come up
	I0729 11:30:31.263175  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:31.263489  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:31.263503  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:31.263463  135967 retry.go:31] will retry after 3.371378134s: waiting for machine to come up
	I0729 11:30:34.636642  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:34.637032  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:34.637054  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:34.636988  135967 retry.go:31] will retry after 2.996860971s: waiting for machine to come up
	I0729 11:30:37.637160  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:37.637558  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find current IP address of domain ha-691698 in network mk-ha-691698
	I0729 11:30:37.637585  135944 main.go:141] libmachine: (ha-691698) DBG | I0729 11:30:37.637511  135967 retry.go:31] will retry after 5.400226697s: waiting for machine to come up
	I0729 11:30:43.041917  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.042434  135944 main.go:141] libmachine: (ha-691698) Found IP for machine: 192.168.39.244
	I0729 11:30:43.042453  135944 main.go:141] libmachine: (ha-691698) Reserving static IP address...
	I0729 11:30:43.042466  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has current primary IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.042865  135944 main.go:141] libmachine: (ha-691698) DBG | unable to find host DHCP lease matching {name: "ha-691698", mac: "52:54:00:5a:22:44", ip: "192.168.39.244"} in network mk-ha-691698
	I0729 11:30:43.117601  135944 main.go:141] libmachine: (ha-691698) DBG | Getting to WaitForSSH function...
	I0729 11:30:43.117625  135944 main.go:141] libmachine: (ha-691698) Reserved static IP address: 192.168.39.244
	I0729 11:30:43.117637  135944 main.go:141] libmachine: (ha-691698) Waiting for SSH to be available...
	I0729 11:30:43.119952  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.120289  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.120317  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.120460  135944 main.go:141] libmachine: (ha-691698) DBG | Using SSH client type: external
	I0729 11:30:43.120490  135944 main.go:141] libmachine: (ha-691698) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa (-rw-------)
	I0729 11:30:43.120525  135944 main.go:141] libmachine: (ha-691698) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:30:43.120538  135944 main.go:141] libmachine: (ha-691698) DBG | About to run SSH command:
	I0729 11:30:43.120554  135944 main.go:141] libmachine: (ha-691698) DBG | exit 0
	I0729 11:30:43.245070  135944 main.go:141] libmachine: (ha-691698) DBG | SSH cmd err, output: <nil>: 
	I0729 11:30:43.245285  135944 main.go:141] libmachine: (ha-691698) KVM machine creation complete!
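	[Note] The run of "unable to find current IP address ... will retry after ..." lines above is a poll-with-growing-delay loop. A minimal sketch of that pattern, assuming a placeholder lookupIP helper (this illustrates the retry shape only, not minikube's retry package):

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    // lookupIP stands in for "query the DHCP leases for this MAC"; placeholder only.
	    func lookupIP() (string, error) {
	    	return "", errors.New("no lease yet")
	    }

	    // waitForIP polls with a growing, jittered delay, similar to the
	    // "will retry after ..." waits in the log above.
	    func waitForIP(timeout time.Duration) (string, error) {
	    	deadline := time.Now().Add(timeout)
	    	delay := 200 * time.Millisecond
	    	for time.Now().Before(deadline) {
	    		if ip, err := lookupIP(); err == nil {
	    			return ip, nil
	    		}
	    		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
	    		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
	    		time.Sleep(wait)
	    		if delay < 5*time.Second {
	    			delay = delay * 3 / 2 // grow the base delay gradually
	    		}
	    	}
	    	return "", errors.New("timed out waiting for IP")
	    }

	    func main() {
	    	if _, err := waitForIP(3 * time.Second); err != nil {
	    		fmt.Println(err)
	    	}
	    }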
	I0729 11:30:43.245628  135944 main.go:141] libmachine: (ha-691698) Calling .GetConfigRaw
	I0729 11:30:43.246179  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:43.246402  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:43.246563  135944 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:30:43.246580  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:30:43.247698  135944 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:30:43.247719  135944 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:30:43.247724  135944 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:30:43.247732  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:43.249777  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.250122  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.250148  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.250294  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:43.250464  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.250656  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.250832  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:43.250996  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:43.251194  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:43.251206  135944 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:30:43.356685  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
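	[Note] Every "About to run SSH command" / "SSH cmd err, output" pair above and below is one command executed over SSH against the new VM. A self-contained sketch of that pattern with golang.org/x/crypto/ssh; the host, user, and key path are copied from the log, the function itself is illustrative and not minikube's ssh_runner:

	    package main

	    import (
	    	"fmt"
	    	"os"

	    	"golang.org/x/crypto/ssh"
	    )

	    // runSSH executes one command on the remote host and returns its combined output.
	    func runSSH(addr, user, keyPath, cmd string) (string, error) {
	    	key, err := os.ReadFile(keyPath)
	    	if err != nil {
	    		return "", err
	    	}
	    	signer, err := ssh.ParsePrivateKey(key)
	    	if err != nil {
	    		return "", err
	    	}
	    	cfg := &ssh.ClientConfig{
	    		User:            user,
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	    	}
	    	client, err := ssh.Dial("tcp", addr, cfg)
	    	if err != nil {
	    		return "", err
	    	}
	    	defer client.Close()

	    	session, err := client.NewSession()
	    	if err != nil {
	    		return "", err
	    	}
	    	defer session.Close()

	    	out, err := session.CombinedOutput(cmd)
	    	return string(out), err
	    }

	    func main() {
	    	out, err := runSSH("192.168.39.244:22", "docker",
	    		"/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa",
	    		"cat /etc/os-release")
	    	fmt.Println(out, err)
	    }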
	I0729 11:30:43.356710  135944 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:30:43.356717  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:43.359166  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.359574  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.359604  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.359784  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:43.360025  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.360200  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.360361  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:43.360556  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:43.360749  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:43.360763  135944 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:30:43.465529  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:30:43.465602  135944 main.go:141] libmachine: found compatible host: buildroot
	I0729 11:30:43.465612  135944 main.go:141] libmachine: Provisioning with buildroot...
	I0729 11:30:43.465620  135944 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:30:43.465861  135944 buildroot.go:166] provisioning hostname "ha-691698"
	I0729 11:30:43.465889  135944 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:30:43.466133  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:43.468694  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.469086  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.469113  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.469357  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:43.469551  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.469697  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.469846  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:43.470055  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:43.470236  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:43.470258  135944 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-691698 && echo "ha-691698" | sudo tee /etc/hostname
	I0729 11:30:43.590126  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-691698
	
	I0729 11:30:43.590156  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:43.592840  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.593214  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.593246  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.593438  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:43.593685  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.593933  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:43.594075  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:43.594227  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:43.594403  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:43.594419  135944 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-691698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-691698/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-691698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:30:43.709450  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:30:43.709482  135944 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 11:30:43.709516  135944 buildroot.go:174] setting up certificates
	I0729 11:30:43.709527  135944 provision.go:84] configureAuth start
	I0729 11:30:43.709536  135944 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:30:43.709856  135944 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:30:43.712024  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.712317  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.712342  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.712503  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:43.714443  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.714747  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:43.714774  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:43.714860  135944 provision.go:143] copyHostCerts
	I0729 11:30:43.714892  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:30:43.714934  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 11:30:43.714947  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:30:43.715010  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 11:30:43.715088  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:30:43.715105  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 11:30:43.715112  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:30:43.715135  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 11:30:43.715176  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:30:43.715192  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 11:30:43.715198  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:30:43.715217  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 11:30:43.715264  135944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.ha-691698 san=[127.0.0.1 192.168.39.244 ha-691698 localhost minikube]
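	[Note] The "generating server cert" line above lists the SANs baked into the machine's server certificate. A compact sketch of producing a certificate with those SANs using Go's crypto/x509; it is self-signed here for brevity (the real flow signs with the cluster CA key) and only illustrates the shape of the step:

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	// SANs taken from the log: [127.0.0.1 192.168.39.244 ha-691698 localhost minikube]
	    	key, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		panic(err)
	    	}
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-691698"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		DNSNames:     []string{"ha-691698", "localhost", "minikube"},
	    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.244")},
	    	}
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    	if err != nil {
	    		panic(err)
	    	}
	    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
	    		panic(err)
	    	}
	    }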
	I0729 11:30:44.206895  135944 provision.go:177] copyRemoteCerts
	I0729 11:30:44.206978  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:30:44.207009  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.209485  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.209789  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.209819  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.209977  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.210166  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.210336  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.210482  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:30:44.295084  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 11:30:44.295158  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:30:44.318943  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 11:30:44.319024  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 11:30:44.342695  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 11:30:44.342759  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:30:44.366483  135944 provision.go:87] duration metric: took 656.942521ms to configureAuth
	I0729 11:30:44.366514  135944 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:30:44.366706  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:30:44.366799  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.369558  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.369883  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.369920  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.370075  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.370283  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.370468  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.370630  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.370834  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:44.371030  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:44.371054  135944 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:30:44.631402  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:30:44.631432  135944 main.go:141] libmachine: Checking connection to Docker...
	I0729 11:30:44.631442  135944 main.go:141] libmachine: (ha-691698) Calling .GetURL
	I0729 11:30:44.632733  135944 main.go:141] libmachine: (ha-691698) DBG | Using libvirt version 6000000
	I0729 11:30:44.634693  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.635028  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.635049  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.635209  135944 main.go:141] libmachine: Docker is up and running!
	I0729 11:30:44.635221  135944 main.go:141] libmachine: Reticulating splines...
	I0729 11:30:44.635228  135944 client.go:171] duration metric: took 25.434064651s to LocalClient.Create
	I0729 11:30:44.635250  135944 start.go:167] duration metric: took 25.434127501s to libmachine.API.Create "ha-691698"
	I0729 11:30:44.635263  135944 start.go:293] postStartSetup for "ha-691698" (driver="kvm2")
	I0729 11:30:44.635278  135944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:30:44.635300  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:44.635562  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:30:44.635589  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.637700  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.637980  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.638007  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.638116  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.638300  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.638437  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.638591  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:30:44.719027  135944 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:30:44.723021  135944 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:30:44.723041  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 11:30:44.723103  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 11:30:44.723170  135944 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 11:30:44.723180  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /etc/ssl/certs/1209632.pem
	I0729 11:30:44.723265  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:30:44.732319  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:30:44.754767  135944 start.go:296] duration metric: took 119.486239ms for postStartSetup
	I0729 11:30:44.754833  135944 main.go:141] libmachine: (ha-691698) Calling .GetConfigRaw
	I0729 11:30:44.755410  135944 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:30:44.757916  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.758278  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.758305  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.758543  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:30:44.758711  135944 start.go:128] duration metric: took 25.576642337s to createHost
	I0729 11:30:44.758734  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.761016  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.761324  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.761348  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.761514  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.761717  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.761854  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.761998  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.762155  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:30:44.762328  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:30:44.762343  135944 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 11:30:44.865338  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252644.846414629
	
	I0729 11:30:44.865366  135944 fix.go:216] guest clock: 1722252644.846414629
	I0729 11:30:44.865374  135944 fix.go:229] Guest: 2024-07-29 11:30:44.846414629 +0000 UTC Remote: 2024-07-29 11:30:44.758721994 +0000 UTC m=+25.684891071 (delta=87.692635ms)
	I0729 11:30:44.865394  135944 fix.go:200] guest clock delta is within tolerance: 87.692635ms
	I0729 11:30:44.865399  135944 start.go:83] releasing machines lock for "ha-691698", held for 25.683399876s
	I0729 11:30:44.865420  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:44.865687  135944 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:30:44.867966  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.868284  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.868310  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.868444  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:44.868991  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:44.869192  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:30:44.869282  135944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:30:44.869337  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.869449  135944 ssh_runner.go:195] Run: cat /version.json
	I0729 11:30:44.869478  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:30:44.871631  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.871916  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.871938  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.872048  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.872053  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.872280  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.872356  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:44.872376  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:44.872458  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.872530  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:30:44.872597  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:30:44.872666  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:30:44.872767  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:30:44.872907  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:30:44.949388  135944 ssh_runner.go:195] Run: systemctl --version
	I0729 11:30:44.967968  135944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:30:45.123809  135944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:30:45.129586  135944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:30:45.129645  135944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:30:45.144349  135944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:30:45.144372  135944 start.go:495] detecting cgroup driver to use...
	I0729 11:30:45.144430  135944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:30:45.160322  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:30:45.172763  135944 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:30:45.172828  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:30:45.185920  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:30:45.198742  135944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:30:45.307520  135944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:30:45.442527  135944 docker.go:233] disabling docker service ...
	I0729 11:30:45.442608  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:30:45.455901  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:30:45.468348  135944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:30:45.598540  135944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:30:45.733852  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:30:45.746953  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:30:45.764765  135944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:30:45.764843  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.774761  135944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:30:45.774850  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.784805  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.794540  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.804447  135944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:30:45.814694  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.824193  135944 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:30:45.840107  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
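The run of sed calls above rewrites /etc/crio/crio.conf.d/02-crio.conf on the VM: it pins the pause image, switches cgroup_manager to cgroupfs with a pod-scoped conmon cgroup, and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. As a rough, self-contained Go sketch of the same substitutions applied to an in-memory copy of the file (the sample contents are made up, and this is an approximation rather than the exact sequence of sed edits), one could write:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Made-up excerpt of /etc/crio/crio.conf.d/02-crio.conf.
    	conf := `[crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.8"
    `
    	// Pin the pause image, as the first sed call does.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	// Force the cgroupfs cgroup manager and a pod-scoped conmon cgroup.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
    		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
    	// Let pods bind low ports without privileges.
    	conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"

    	fmt.Print(conf)
    }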
	I0729 11:30:45.850356  135944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:30:45.859239  135944 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:30:45.859316  135944 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:30:45.871650  135944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:30:45.880929  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:30:46.000530  135944 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:30:46.135898  135944 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:30:46.135999  135944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:30:46.140584  135944 start.go:563] Will wait 60s for crictl version
	I0729 11:30:46.140650  135944 ssh_runner.go:195] Run: which crictl
	I0729 11:30:46.144122  135944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:30:46.178250  135944 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:30:46.178344  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:30:46.204928  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:30:46.233564  135944 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:30:46.234884  135944 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:30:46.237323  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:46.237662  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:30:46.237690  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:30:46.237879  135944 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:30:46.241677  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
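The bash one-liner above makes the host.minikube.internal mapping idempotent: it drops any stale entry and re-appends the current IP. A hedged Go equivalent operating on an in-memory hosts file (the path and IP here are illustrative) might be:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHostsEntry removes any existing line for the given hostname and
    // appends a fresh "ip<TAB>hostname" mapping, mirroring the one-liner above.
    func upsertHostsEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		trimmed := strings.TrimSpace(line)
    		if strings.HasSuffix(trimmed, "\t"+name) || strings.HasSuffix(trimmed, " "+name) {
    			continue // drop the stale entry
    		}
    		kept = append(kept, line)
    	}
    	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
    		fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
    	fmt.Print(upsertHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
    }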
	I0729 11:30:46.253621  135944 kubeadm.go:883] updating cluster {Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:30:46.253734  135944 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:30:46.253779  135944 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:30:46.285442  135944 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 11:30:46.285512  135944 ssh_runner.go:195] Run: which lz4
	I0729 11:30:46.289139  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 11:30:46.289258  135944 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 11:30:46.293147  135944 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 11:30:46.293189  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 11:30:47.573761  135944 crio.go:462] duration metric: took 1.284535323s to copy over tarball
	I0729 11:30:47.573839  135944 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 11:30:49.730447  135944 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.15657971s)
	I0729 11:30:49.730473  135944 crio.go:469] duration metric: took 2.156679938s to extract the tarball
	I0729 11:30:49.730481  135944 ssh_runner.go:146] rm: /preloaded.tar.lz4
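The preload step above copies the cached-images tarball onto the VM, unpacks it with tar + lz4 into /var, and then deletes it. A minimal Go sketch that shells out to the same tar invocation locally (the tarball path is hypothetical and the sketch assumes tar, lz4, and sudo are available) could be:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4" // hypothetical path on the target machine

    	start := time.Now()
    	// Same flags as in the log: preserve security xattrs, decompress with lz4,
    	// and extract under /var where CRI-O keeps its image store.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    	fmt.Printf("extracted preload in %s\n", time.Since(start))
    }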
	I0729 11:30:49.767686  135944 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:30:49.809380  135944 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:30:49.809402  135944 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:30:49.809410  135944 kubeadm.go:934] updating node { 192.168.39.244 8443 v1.30.3 crio true true} ...
	I0729 11:30:49.809520  135944 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-691698 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:30:49.809607  135944 ssh_runner.go:195] Run: crio config
	I0729 11:30:49.854193  135944 cni.go:84] Creating CNI manager for ""
	I0729 11:30:49.854217  135944 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 11:30:49.854229  135944 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:30:49.854254  135944 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.244 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-691698 NodeName:ha-691698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:30:49.854416  135944 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-691698"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
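The kubeadm config above is rendered from the options struct logged a few lines earlier. Purely as an illustration, a stripped-down Go sketch that templates the InitConfiguration/ClusterConfiguration fragment with text/template (the template text and field names below are simplified placeholders, not minikube's real template) might look like:

    package main

    import (
    	"os"
    	"text/template"
    )

    // opts is a deliberately small stand-in for the kubeadm options struct in the log.
    type opts struct {
    	AdvertiseAddress  string
    	APIServerPort     int
    	NodeName          string
    	PodSubnet         string
    	ServiceCIDR       string
    	KubernetesVersion string
    }

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	// Values taken from the run above.
    	_ = t.Execute(os.Stdout, opts{
    		AdvertiseAddress:  "192.168.39.244",
    		APIServerPort:     8443,
    		NodeName:          "ha-691698",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceCIDR:       "10.96.0.0/12",
    		KubernetesVersion: "v1.30.3",
    	})
    }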
	I0729 11:30:49.854443  135944 kube-vip.go:115] generating kube-vip config ...
	I0729 11:30:49.854497  135944 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 11:30:49.871563  135944 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 11:30:49.871680  135944 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
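The static pod manifest above runs kube-vip so that the control-plane VIP 192.168.39.254 answers on port 8443 alongside the node's own address. A small Go sketch that probes whether the VIP endpoint is accepting connections (purely illustrative; it only checks the TCP handshake, not API-server health) could be:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Control-plane VIP and port taken from the kube-vip config above.
    	addr := net.JoinHostPort("192.168.39.254", "8443")

    	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    	if err != nil {
    		fmt.Printf("VIP %s not reachable yet: %v\n", addr, err)
    		return
    	}
    	defer conn.Close()
    	fmt.Printf("VIP %s is accepting TCP connections\n", addr)
    }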
	I0729 11:30:49.871738  135944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:30:49.883595  135944 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:30:49.883669  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 11:30:49.895077  135944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 11:30:49.910842  135944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:30:49.926669  135944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 11:30:49.942257  135944 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 11:30:49.958201  135944 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 11:30:49.961844  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:30:49.973747  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:30:50.105989  135944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:30:50.122400  135944 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698 for IP: 192.168.39.244
	I0729 11:30:50.122424  135944 certs.go:194] generating shared ca certs ...
	I0729 11:30:50.122442  135944 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.122611  135944 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 11:30:50.122652  135944 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 11:30:50.122659  135944 certs.go:256] generating profile certs ...
	I0729 11:30:50.122708  135944 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key
	I0729 11:30:50.122722  135944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt with IP's: []
	I0729 11:30:50.236541  135944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt ...
	I0729 11:30:50.236578  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt: {Name:mke8f3e6ec420b4c7ad08603a289200c805aa1e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.236794  135944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key ...
	I0729 11:30:50.236815  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key: {Name:mk60a2b766263435835454110f4741b531e9c8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.236933  135944 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.30d4e195
	I0729 11:30:50.236950  135944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.30d4e195 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244 192.168.39.254]
	I0729 11:30:50.397902  135944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.30d4e195 ...
	I0729 11:30:50.397936  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.30d4e195: {Name:mkea92b7b889a15dc340672004a73ae9e111dde7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.398125  135944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.30d4e195 ...
	I0729 11:30:50.398141  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.30d4e195: {Name:mkc391ea43924be309a9f605bb37e5b311e761f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.398237  135944 certs.go:381] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.30d4e195 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt
	I0729 11:30:50.398311  135944 certs.go:385] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.30d4e195 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key
	I0729 11:30:50.398362  135944 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key
	I0729 11:30:50.398376  135944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt with IP's: []
	I0729 11:30:50.480000  135944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt ...
	I0729 11:30:50.480032  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt: {Name:mk49c3bb32e9caa3f7ce2caa9de725305139b3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:30:50.480215  135944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key ...
	I0729 11:30:50.480228  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key: {Name:mkb7840f70ca8e3ba14ae9bc295eaa388bf6d4c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
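The profile-cert steps above mint an apiserver certificate whose IP SANs include the cluster service IP, loopback, the node IP, and the VIP. As a rough illustration of the same idea using only the Go standard library (not minikube's crypto helpers, and self-signed rather than CA-signed), one could generate a certificate carrying those IP SANs like this:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs mirroring the list in the log above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.244"),
    			net.ParseIP("192.168.39.254"),
    		},
    	}

    	// Self-signed for brevity; the real flow signs profile certs with the minikube CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }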
	I0729 11:30:50.480322  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 11:30:50.480341  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 11:30:50.480353  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 11:30:50.480369  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 11:30:50.480381  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 11:30:50.480392  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 11:30:50.480403  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 11:30:50.480413  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 11:30:50.480463  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 11:30:50.480499  135944 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 11:30:50.480506  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 11:30:50.480525  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:30:50.480543  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:30:50.480567  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 11:30:50.480602  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:30:50.480629  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /usr/share/ca-certificates/1209632.pem
	I0729 11:30:50.480643  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:30:50.480655  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem -> /usr/share/ca-certificates/120963.pem
	I0729 11:30:50.481276  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:30:50.505675  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:30:50.528598  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:30:50.551240  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 11:30:50.574514  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 11:30:50.597618  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:30:50.620388  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:30:50.643742  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:30:50.666672  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 11:30:50.689570  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:30:50.713112  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 11:30:50.736650  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:30:50.753003  135944 ssh_runner.go:195] Run: openssl version
	I0729 11:30:50.758628  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 11:30:50.769691  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 11:30:50.774166  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 11:30:50.774229  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 11:30:50.780217  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:30:50.791006  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:30:50.801847  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:30:50.806346  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:30:50.806411  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:30:50.811983  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:30:50.822657  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 11:30:50.833165  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 11:30:50.837715  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 11:30:50.837767  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 11:30:50.843180  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
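The chain of commands above computes each certificate's OpenSSL subject hash and links /etc/ssl/certs/<hash>.0 (for example 3ec20f2e.0 and b5213941.0 above) to the PEM file so the system trust store picks it up. A hedged Go sketch of that step, shelling out to openssl for the hash (the certificate path is illustrative, and writing under /etc/ssl/certs requires root), could be:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkBySubjectHash symlinks /etc/ssl/certs/<hash>.0 to certPath, the same
    // layout the `openssl x509 -hash` plus `ln -fs` pair in the log produces.
    func linkBySubjectHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	_ = os.Remove(link) // replace any stale link, like ln -fs
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		log.Fatal(err)
    	}
    }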
	I0729 11:30:50.853861  135944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:30:50.857916  135944 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 11:30:50.857981  135944 kubeadm.go:392] StartCluster: {Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:30:50.858080  135944 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:30:50.858140  135944 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:30:50.894092  135944 cri.go:89] found id: ""
	I0729 11:30:50.894183  135944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:30:50.906886  135944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:30:50.917543  135944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:30:50.930252  135944 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:30:50.930286  135944 kubeadm.go:157] found existing configuration files:
	
	I0729 11:30:50.930340  135944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:30:50.939612  135944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:30:50.939684  135944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:30:50.949518  135944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:30:50.958597  135944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:30:50.958674  135944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:30:50.968466  135944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:30:50.981832  135944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:30:50.981893  135944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:30:50.991645  135944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:30:51.000921  135944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:30:51.001010  135944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 11:30:51.010557  135944 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 11:30:51.234055  135944 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 11:31:02.205155  135944 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:31:02.205229  135944 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:31:02.205354  135944 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:31:02.205494  135944 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:31:02.205599  135944 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 11:31:02.205653  135944 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:31:02.207264  135944 out.go:204]   - Generating certificates and keys ...
	I0729 11:31:02.207345  135944 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:31:02.207404  135944 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:31:02.207472  135944 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 11:31:02.207523  135944 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 11:31:02.207575  135944 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 11:31:02.207646  135944 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 11:31:02.207721  135944 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 11:31:02.207887  135944 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-691698 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0729 11:31:02.207964  135944 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 11:31:02.208086  135944 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-691698 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0729 11:31:02.208142  135944 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 11:31:02.208194  135944 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 11:31:02.208231  135944 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 11:31:02.208277  135944 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:31:02.208321  135944 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:31:02.208371  135944 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:31:02.208416  135944 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:31:02.208467  135944 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:31:02.208523  135944 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:31:02.208617  135944 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:31:02.208708  135944 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:31:02.211074  135944 out.go:204]   - Booting up control plane ...
	I0729 11:31:02.211166  135944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:31:02.211269  135944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:31:02.211363  135944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:31:02.211495  135944 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:31:02.211608  135944 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:31:02.211667  135944 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:31:02.211816  135944 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:31:02.211922  135944 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:31:02.212011  135944 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.346862ms
	I0729 11:31:02.212119  135944 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 11:31:02.212345  135944 kubeadm.go:310] [api-check] The API server is healthy after 5.946822842s
	I0729 11:31:02.212488  135944 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 11:31:02.212602  135944 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 11:31:02.212652  135944 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 11:31:02.212799  135944 kubeadm.go:310] [mark-control-plane] Marking the node ha-691698 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 11:31:02.212876  135944 kubeadm.go:310] [bootstrap-token] Using token: m6i535.jxv009nwzx1o5m73
	I0729 11:31:02.214392  135944 out.go:204]   - Configuring RBAC rules ...
	I0729 11:31:02.214544  135944 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 11:31:02.214665  135944 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 11:31:02.214809  135944 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 11:31:02.214965  135944 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 11:31:02.215096  135944 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 11:31:02.215219  135944 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 11:31:02.215346  135944 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 11:31:02.215410  135944 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 11:31:02.215480  135944 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 11:31:02.215489  135944 kubeadm.go:310] 
	I0729 11:31:02.215572  135944 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 11:31:02.215585  135944 kubeadm.go:310] 
	I0729 11:31:02.215642  135944 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 11:31:02.215649  135944 kubeadm.go:310] 
	I0729 11:31:02.215676  135944 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 11:31:02.215722  135944 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 11:31:02.215770  135944 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 11:31:02.215776  135944 kubeadm.go:310] 
	I0729 11:31:02.215816  135944 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 11:31:02.215821  135944 kubeadm.go:310] 
	I0729 11:31:02.215857  135944 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 11:31:02.215862  135944 kubeadm.go:310] 
	I0729 11:31:02.215907  135944 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 11:31:02.215968  135944 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 11:31:02.216056  135944 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 11:31:02.216065  135944 kubeadm.go:310] 
	I0729 11:31:02.216170  135944 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 11:31:02.216244  135944 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 11:31:02.216250  135944 kubeadm.go:310] 
	I0729 11:31:02.216313  135944 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m6i535.jxv009nwzx1o5m73 \
	I0729 11:31:02.216399  135944 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb \
	I0729 11:31:02.216424  135944 kubeadm.go:310] 	--control-plane 
	I0729 11:31:02.216430  135944 kubeadm.go:310] 
	I0729 11:31:02.216519  135944 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 11:31:02.216529  135944 kubeadm.go:310] 
	I0729 11:31:02.216629  135944 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m6i535.jxv009nwzx1o5m73 \
	I0729 11:31:02.216786  135944 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb 
	I0729 11:31:02.216800  135944 cni.go:84] Creating CNI manager for ""
	I0729 11:31:02.216807  135944 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 11:31:02.218300  135944 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 11:31:02.219570  135944 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 11:31:02.225110  135944 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 11:31:02.225131  135944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 11:31:02.247312  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 11:31:02.584841  135944 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 11:31:02.585048  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:02.585050  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-691698 minikube.k8s.io/updated_at=2024_07_29T11_31_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53 minikube.k8s.io/name=ha-691698 minikube.k8s.io/primary=true
	I0729 11:31:02.614359  135944 ops.go:34] apiserver oom_adj: -16
	I0729 11:31:02.740749  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:03.241732  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:03.741794  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:04.241836  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:04.740889  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:05.241753  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:05.741367  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:06.240788  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:06.741339  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:07.241145  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:07.741621  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:08.241500  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:08.741511  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:09.240825  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:09.740880  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:10.240772  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:10.741587  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:11.240828  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:11.741204  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:12.241449  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:12.741830  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:13.240845  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:13.740842  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:14.241264  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:14.741742  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 11:31:14.822131  135944 kubeadm.go:1113] duration metric: took 12.237156383s to wait for elevateKubeSystemPrivileges
	I0729 11:31:14.822176  135944 kubeadm.go:394] duration metric: took 23.964200026s to StartCluster
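The repeated "kubectl get sa default" calls above are minikube waiting for the default service account to exist, which it treats as a signal that the API server is ready; the 12.2s elevateKubeSystemPrivileges duration is how long that wait took. A rough shell equivalent of the retry, with the interval assumed from the ~500ms spacing of the log timestamps rather than any documented value:

	# keep polling until the default service account is served by the API server
	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # assumed interval, matching the spacing of the log entries above
	done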
	I0729 11:31:14.822211  135944 settings.go:142] acquiring lock: {Name:mkb2a487c2f52476061a6d736b8e75563062eb9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:31:14.822372  135944 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:31:14.823354  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/kubeconfig: {Name:mkb219e196dca6dd8aa7af14918c6562be58786a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:31:14.823640  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 11:31:14.823645  135944 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:31:14.823673  135944 start.go:241] waiting for startup goroutines ...
	I0729 11:31:14.823684  135944 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:31:14.823757  135944 addons.go:69] Setting storage-provisioner=true in profile "ha-691698"
	I0729 11:31:14.823774  135944 addons.go:69] Setting default-storageclass=true in profile "ha-691698"
	I0729 11:31:14.823804  135944 addons.go:234] Setting addon storage-provisioner=true in "ha-691698"
	I0729 11:31:14.823826  135944 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-691698"
	I0729 11:31:14.823843  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:31:14.823849  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:31:14.824176  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:14.824204  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:14.824253  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:14.824289  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:14.839549  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
	I0729 11:31:14.839574  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36657
	I0729 11:31:14.840035  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:14.840066  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:14.840574  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:14.840600  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:14.840711  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:14.840746  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:14.841012  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:14.841145  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:14.841333  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:31:14.841564  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:14.841605  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:14.843849  135944 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:31:14.844190  135944 kapi.go:59] client config for ha-691698: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt", KeyFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key", CAFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 11:31:14.844798  135944 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 11:31:14.845096  135944 addons.go:234] Setting addon default-storageclass=true in "ha-691698"
	I0729 11:31:14.845145  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:31:14.845522  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:14.845571  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:14.857812  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I0729 11:31:14.858355  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:14.858867  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:14.858888  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:14.859215  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:14.859416  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:31:14.861205  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:31:14.861859  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I0729 11:31:14.862316  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:14.862820  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:14.862839  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:14.863202  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:14.863737  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:14.863762  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:14.864062  135944 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:31:14.865602  135944 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:31:14.865625  135944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:31:14.865645  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:31:14.868672  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:14.869150  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:31:14.869179  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:14.869478  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:31:14.869675  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:31:14.869840  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:31:14.869982  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:31:14.879437  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0729 11:31:14.879925  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:14.880438  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:14.880465  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:14.880794  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:14.881008  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:31:14.882673  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:31:14.882889  135944 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:31:14.882906  135944 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:31:14.882921  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:31:14.886201  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:14.886830  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:31:14.886851  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:14.887028  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:31:14.887231  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:31:14.887384  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:31:14.887515  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:31:14.988741  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 11:31:15.030626  135944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:31:15.031691  135944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:31:15.427595  135944 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
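For reference, the sed pipeline a few lines above rewrites the coredns ConfigMap so that its Corefile gains the hosts block below; this snippet is reconstructed directly from the sed expression in the log, not copied from the running cluster:

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }

This is what lets pods resolve host.minikube.internal to the host side of the VM network (192.168.39.1).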
	I0729 11:31:15.736784  135944 main.go:141] libmachine: Making call to close driver server
	I0729 11:31:15.736800  135944 main.go:141] libmachine: Making call to close driver server
	I0729 11:31:15.736819  135944 main.go:141] libmachine: (ha-691698) Calling .Close
	I0729 11:31:15.736809  135944 main.go:141] libmachine: (ha-691698) Calling .Close
	I0729 11:31:15.737163  135944 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:31:15.737181  135944 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:31:15.737191  135944 main.go:141] libmachine: Making call to close driver server
	I0729 11:31:15.737203  135944 main.go:141] libmachine: (ha-691698) Calling .Close
	I0729 11:31:15.737212  135944 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:31:15.737223  135944 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:31:15.737232  135944 main.go:141] libmachine: Making call to close driver server
	I0729 11:31:15.737239  135944 main.go:141] libmachine: (ha-691698) Calling .Close
	I0729 11:31:15.737496  135944 main.go:141] libmachine: (ha-691698) DBG | Closing plugin on server side
	I0729 11:31:15.737500  135944 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:31:15.737514  135944 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:31:15.737521  135944 main.go:141] libmachine: (ha-691698) DBG | Closing plugin on server side
	I0729 11:31:15.737548  135944 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:31:15.737557  135944 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:31:15.737628  135944 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 11:31:15.737642  135944 round_trippers.go:469] Request Headers:
	I0729 11:31:15.737653  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:31:15.737658  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:31:15.745808  135944 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 11:31:15.746362  135944 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 11:31:15.746379  135944 round_trippers.go:469] Request Headers:
	I0729 11:31:15.746389  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:31:15.746396  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:31:15.746399  135944 round_trippers.go:473]     Content-Type: application/json
	I0729 11:31:15.749136  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:31:15.749347  135944 main.go:141] libmachine: Making call to close driver server
	I0729 11:31:15.749363  135944 main.go:141] libmachine: (ha-691698) Calling .Close
	I0729 11:31:15.749662  135944 main.go:141] libmachine: Successfully made call to close driver server
	I0729 11:31:15.749688  135944 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 11:31:15.751405  135944 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 11:31:15.752479  135944 addons.go:510] duration metric: took 928.790379ms for enable addons: enabled=[storage-provisioner default-storageclass]
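Per the toEnable map logged at 11:31:14.823684, only storage-provisioner and default-storageclass were requested for this profile, which matches the enabled=[...] list here. One way to confirm addon state afterwards would be, for example (an illustrative command, not taken from this log):

	out/minikube-linux-amd64 -p ha-691698 addons list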
	I0729 11:31:15.752531  135944 start.go:246] waiting for cluster config update ...
	I0729 11:31:15.752547  135944 start.go:255] writing updated cluster config ...
	I0729 11:31:15.754105  135944 out.go:177] 
	I0729 11:31:15.755518  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:31:15.755612  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:31:15.757155  135944 out.go:177] * Starting "ha-691698-m02" control-plane node in "ha-691698" cluster
	I0729 11:31:15.758504  135944 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:31:15.758532  135944 cache.go:56] Caching tarball of preloaded images
	I0729 11:31:15.758627  135944 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:31:15.758638  135944 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:31:15.758711  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:31:15.758888  135944 start.go:360] acquireMachinesLock for ha-691698-m02: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:31:15.758928  135944 start.go:364] duration metric: took 21.733µs to acquireMachinesLock for "ha-691698-m02"
	I0729 11:31:15.758945  135944 start.go:93] Provisioning new machine with config: &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:31:15.759010  135944 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 11:31:15.760628  135944 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:31:15.760723  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:15.760748  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:15.775902  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
	I0729 11:31:15.776462  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:15.777074  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:15.777098  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:15.777420  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:15.777662  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetMachineName
	I0729 11:31:15.777816  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:15.778019  135944 start.go:159] libmachine.API.Create for "ha-691698" (driver="kvm2")
	I0729 11:31:15.778049  135944 client.go:168] LocalClient.Create starting
	I0729 11:31:15.778088  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem
	I0729 11:31:15.778137  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:31:15.778160  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:31:15.778233  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem
	I0729 11:31:15.778262  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:31:15.778285  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:31:15.778310  135944 main.go:141] libmachine: Running pre-create checks...
	I0729 11:31:15.778321  135944 main.go:141] libmachine: (ha-691698-m02) Calling .PreCreateCheck
	I0729 11:31:15.778519  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetConfigRaw
	I0729 11:31:15.778989  135944 main.go:141] libmachine: Creating machine...
	I0729 11:31:15.779008  135944 main.go:141] libmachine: (ha-691698-m02) Calling .Create
	I0729 11:31:15.779154  135944 main.go:141] libmachine: (ha-691698-m02) Creating KVM machine...
	I0729 11:31:15.780378  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found existing default KVM network
	I0729 11:31:15.780497  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found existing private KVM network mk-ha-691698
	I0729 11:31:15.780661  135944 main.go:141] libmachine: (ha-691698-m02) Setting up store path in /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02 ...
	I0729 11:31:15.780692  135944 main.go:141] libmachine: (ha-691698-m02) Building disk image from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 11:31:15.780757  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:15.780649  136343 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:31:15.780870  135944 main.go:141] libmachine: (ha-691698-m02) Downloading /home/jenkins/minikube-integration/19336-113730/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 11:31:16.039880  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:16.039726  136343 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa...
	I0729 11:31:16.284500  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:16.284340  136343 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/ha-691698-m02.rawdisk...
	I0729 11:31:16.284534  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Writing magic tar header
	I0729 11:31:16.284550  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Writing SSH key tar header
	I0729 11:31:16.284562  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:16.284452  136343 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02 ...
	I0729 11:31:16.284607  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02
	I0729 11:31:16.284647  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines
	I0729 11:31:16.284669  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:31:16.284693  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02 (perms=drwx------)
	I0729 11:31:16.284708  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines (perms=drwxr-xr-x)
	I0729 11:31:16.284714  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube (perms=drwxr-xr-x)
	I0729 11:31:16.284725  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730 (perms=drwxrwxr-x)
	I0729 11:31:16.284737  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 11:31:16.284756  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730
	I0729 11:31:16.284772  135944 main.go:141] libmachine: (ha-691698-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 11:31:16.284779  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 11:31:16.284798  135944 main.go:141] libmachine: (ha-691698-m02) Creating domain...
	I0729 11:31:16.284816  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 11:31:16.284832  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Checking permissions on dir: /home
	I0729 11:31:16.284840  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Skipping /home - not owner
	I0729 11:31:16.286006  135944 main.go:141] libmachine: (ha-691698-m02) define libvirt domain using xml: 
	I0729 11:31:16.286031  135944 main.go:141] libmachine: (ha-691698-m02) <domain type='kvm'>
	I0729 11:31:16.286048  135944 main.go:141] libmachine: (ha-691698-m02)   <name>ha-691698-m02</name>
	I0729 11:31:16.286056  135944 main.go:141] libmachine: (ha-691698-m02)   <memory unit='MiB'>2200</memory>
	I0729 11:31:16.286068  135944 main.go:141] libmachine: (ha-691698-m02)   <vcpu>2</vcpu>
	I0729 11:31:16.286074  135944 main.go:141] libmachine: (ha-691698-m02)   <features>
	I0729 11:31:16.286084  135944 main.go:141] libmachine: (ha-691698-m02)     <acpi/>
	I0729 11:31:16.286090  135944 main.go:141] libmachine: (ha-691698-m02)     <apic/>
	I0729 11:31:16.286101  135944 main.go:141] libmachine: (ha-691698-m02)     <pae/>
	I0729 11:31:16.286114  135944 main.go:141] libmachine: (ha-691698-m02)     
	I0729 11:31:16.286148  135944 main.go:141] libmachine: (ha-691698-m02)   </features>
	I0729 11:31:16.286174  135944 main.go:141] libmachine: (ha-691698-m02)   <cpu mode='host-passthrough'>
	I0729 11:31:16.286187  135944 main.go:141] libmachine: (ha-691698-m02)   
	I0729 11:31:16.286198  135944 main.go:141] libmachine: (ha-691698-m02)   </cpu>
	I0729 11:31:16.286208  135944 main.go:141] libmachine: (ha-691698-m02)   <os>
	I0729 11:31:16.286218  135944 main.go:141] libmachine: (ha-691698-m02)     <type>hvm</type>
	I0729 11:31:16.286228  135944 main.go:141] libmachine: (ha-691698-m02)     <boot dev='cdrom'/>
	I0729 11:31:16.286237  135944 main.go:141] libmachine: (ha-691698-m02)     <boot dev='hd'/>
	I0729 11:31:16.286247  135944 main.go:141] libmachine: (ha-691698-m02)     <bootmenu enable='no'/>
	I0729 11:31:16.286257  135944 main.go:141] libmachine: (ha-691698-m02)   </os>
	I0729 11:31:16.286267  135944 main.go:141] libmachine: (ha-691698-m02)   <devices>
	I0729 11:31:16.286276  135944 main.go:141] libmachine: (ha-691698-m02)     <disk type='file' device='cdrom'>
	I0729 11:31:16.286293  135944 main.go:141] libmachine: (ha-691698-m02)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/boot2docker.iso'/>
	I0729 11:31:16.286305  135944 main.go:141] libmachine: (ha-691698-m02)       <target dev='hdc' bus='scsi'/>
	I0729 11:31:16.286317  135944 main.go:141] libmachine: (ha-691698-m02)       <readonly/>
	I0729 11:31:16.286327  135944 main.go:141] libmachine: (ha-691698-m02)     </disk>
	I0729 11:31:16.286365  135944 main.go:141] libmachine: (ha-691698-m02)     <disk type='file' device='disk'>
	I0729 11:31:16.286391  135944 main.go:141] libmachine: (ha-691698-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 11:31:16.286408  135944 main.go:141] libmachine: (ha-691698-m02)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/ha-691698-m02.rawdisk'/>
	I0729 11:31:16.286421  135944 main.go:141] libmachine: (ha-691698-m02)       <target dev='hda' bus='virtio'/>
	I0729 11:31:16.286434  135944 main.go:141] libmachine: (ha-691698-m02)     </disk>
	I0729 11:31:16.286444  135944 main.go:141] libmachine: (ha-691698-m02)     <interface type='network'>
	I0729 11:31:16.286455  135944 main.go:141] libmachine: (ha-691698-m02)       <source network='mk-ha-691698'/>
	I0729 11:31:16.286469  135944 main.go:141] libmachine: (ha-691698-m02)       <model type='virtio'/>
	I0729 11:31:16.286483  135944 main.go:141] libmachine: (ha-691698-m02)     </interface>
	I0729 11:31:16.286491  135944 main.go:141] libmachine: (ha-691698-m02)     <interface type='network'>
	I0729 11:31:16.286518  135944 main.go:141] libmachine: (ha-691698-m02)       <source network='default'/>
	I0729 11:31:16.286529  135944 main.go:141] libmachine: (ha-691698-m02)       <model type='virtio'/>
	I0729 11:31:16.286538  135944 main.go:141] libmachine: (ha-691698-m02)     </interface>
	I0729 11:31:16.286554  135944 main.go:141] libmachine: (ha-691698-m02)     <serial type='pty'>
	I0729 11:31:16.286565  135944 main.go:141] libmachine: (ha-691698-m02)       <target port='0'/>
	I0729 11:31:16.286575  135944 main.go:141] libmachine: (ha-691698-m02)     </serial>
	I0729 11:31:16.286588  135944 main.go:141] libmachine: (ha-691698-m02)     <console type='pty'>
	I0729 11:31:16.286603  135944 main.go:141] libmachine: (ha-691698-m02)       <target type='serial' port='0'/>
	I0729 11:31:16.286616  135944 main.go:141] libmachine: (ha-691698-m02)     </console>
	I0729 11:31:16.286630  135944 main.go:141] libmachine: (ha-691698-m02)     <rng model='virtio'>
	I0729 11:31:16.286651  135944 main.go:141] libmachine: (ha-691698-m02)       <backend model='random'>/dev/random</backend>
	I0729 11:31:16.286664  135944 main.go:141] libmachine: (ha-691698-m02)     </rng>
	I0729 11:31:16.286676  135944 main.go:141] libmachine: (ha-691698-m02)     
	I0729 11:31:16.286683  135944 main.go:141] libmachine: (ha-691698-m02)     
	I0729 11:31:16.286689  135944 main.go:141] libmachine: (ha-691698-m02)   </devices>
	I0729 11:31:16.286697  135944 main.go:141] libmachine: (ha-691698-m02) </domain>
	I0729 11:31:16.286702  135944 main.go:141] libmachine: (ha-691698-m02) 
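The block above is the full libvirt domain XML minikube generates for the m02 machine: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-ha-691698 network, one on libvirt's default network). Once the domain exists it can be inspected with standard libvirt tooling, for example (illustrative commands; they assume virsh access to the same qemu:///system URI the driver uses):

	virsh -c qemu:///system dumpxml ha-691698-m02
	virsh -c qemu:///system net-dhcp-leases mk-ha-691698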
	I0729 11:31:16.293362  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:f4:3e in network default
	I0729 11:31:16.293916  135944 main.go:141] libmachine: (ha-691698-m02) Ensuring networks are active...
	I0729 11:31:16.293953  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:16.294649  135944 main.go:141] libmachine: (ha-691698-m02) Ensuring network default is active
	I0729 11:31:16.294929  135944 main.go:141] libmachine: (ha-691698-m02) Ensuring network mk-ha-691698 is active
	I0729 11:31:16.295288  135944 main.go:141] libmachine: (ha-691698-m02) Getting domain xml...
	I0729 11:31:16.296007  135944 main.go:141] libmachine: (ha-691698-m02) Creating domain...
	I0729 11:31:17.580708  135944 main.go:141] libmachine: (ha-691698-m02) Waiting to get IP...
	I0729 11:31:17.581590  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:17.582071  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:17.582147  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:17.582060  136343 retry.go:31] will retry after 191.312407ms: waiting for machine to come up
	I0729 11:31:17.775740  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:17.776302  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:17.776332  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:17.776257  136343 retry.go:31] will retry after 262.18085ms: waiting for machine to come up
	I0729 11:31:18.039882  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:18.040360  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:18.040387  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:18.040320  136343 retry.go:31] will retry after 395.238801ms: waiting for machine to come up
	I0729 11:31:18.436806  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:18.437275  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:18.437312  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:18.437230  136343 retry.go:31] will retry after 467.322595ms: waiting for machine to come up
	I0729 11:31:18.905902  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:18.906302  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:18.906331  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:18.906255  136343 retry.go:31] will retry after 576.65986ms: waiting for machine to come up
	I0729 11:31:19.485198  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:19.485593  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:19.485622  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:19.485551  136343 retry.go:31] will retry after 792.662051ms: waiting for machine to come up
	I0729 11:31:20.279605  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:20.280004  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:20.280034  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:20.279951  136343 retry.go:31] will retry after 866.125195ms: waiting for machine to come up
	I0729 11:31:21.147263  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:21.147675  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:21.147699  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:21.147600  136343 retry.go:31] will retry after 1.459748931s: waiting for machine to come up
	I0729 11:31:22.609018  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:22.609433  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:22.609462  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:22.609386  136343 retry.go:31] will retry after 1.125830798s: waiting for machine to come up
	I0729 11:31:23.736689  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:23.737103  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:23.737123  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:23.737058  136343 retry.go:31] will retry after 1.852479279s: waiting for machine to come up
	I0729 11:31:25.591695  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:25.592063  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:25.592096  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:25.591997  136343 retry.go:31] will retry after 2.458375742s: waiting for machine to come up
	I0729 11:31:28.053015  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:28.053440  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:28.053465  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:28.053381  136343 retry.go:31] will retry after 3.563552308s: waiting for machine to come up
	I0729 11:31:31.618061  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:31.618375  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find current IP address of domain ha-691698-m02 in network mk-ha-691698
	I0729 11:31:31.618408  135944 main.go:141] libmachine: (ha-691698-m02) DBG | I0729 11:31:31.618358  136343 retry.go:31] will retry after 3.854966211s: waiting for machine to come up
	I0729 11:31:35.477501  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.478142  135944 main.go:141] libmachine: (ha-691698-m02) Found IP for machine: 192.168.39.5
	I0729 11:31:35.478165  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has current primary IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.478173  135944 main.go:141] libmachine: (ha-691698-m02) Reserving static IP address...
	I0729 11:31:35.478628  135944 main.go:141] libmachine: (ha-691698-m02) DBG | unable to find host DHCP lease matching {name: "ha-691698-m02", mac: "52:54:00:d9:b5:f9", ip: "192.168.39.5"} in network mk-ha-691698
	I0729 11:31:35.557297  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Getting to WaitForSSH function...
	I0729 11:31:35.557325  135944 main.go:141] libmachine: (ha-691698-m02) Reserved static IP address: 192.168.39.5
	I0729 11:31:35.557340  135944 main.go:141] libmachine: (ha-691698-m02) Waiting for SSH to be available...
	I0729 11:31:35.560072  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.560373  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:35.560404  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.560575  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Using SSH client type: external
	I0729 11:31:35.560604  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa (-rw-------)
	I0729 11:31:35.560631  135944 main.go:141] libmachine: (ha-691698-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:31:35.560649  135944 main.go:141] libmachine: (ha-691698-m02) DBG | About to run SSH command:
	I0729 11:31:35.560675  135944 main.go:141] libmachine: (ha-691698-m02) DBG | exit 0
	I0729 11:31:35.681001  135944 main.go:141] libmachine: (ha-691698-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 11:31:35.681285  135944 main.go:141] libmachine: (ha-691698-m02) KVM machine creation complete!
	I0729 11:31:35.681579  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetConfigRaw
	I0729 11:31:35.682171  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:35.682336  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:35.682514  135944 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:31:35.682529  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:31:35.683728  135944 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:31:35.683746  135944 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:31:35.683755  135944 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:31:35.683763  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:35.685972  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.686383  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:35.686416  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.686596  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:35.686813  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.687018  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.687198  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:35.687403  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:35.687625  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:35.687637  135944 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:31:35.788272  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:31:35.788298  135944 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:31:35.788308  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:35.791487  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.791828  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:35.791858  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.792005  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:35.792238  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.792397  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.792509  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:35.792681  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:35.792852  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:35.792862  135944 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:31:35.893736  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:31:35.893890  135944 main.go:141] libmachine: found compatible host: buildroot
	I0729 11:31:35.893906  135944 main.go:141] libmachine: Provisioning with buildroot...
	I0729 11:31:35.893919  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetMachineName
	I0729 11:31:35.894240  135944 buildroot.go:166] provisioning hostname "ha-691698-m02"
	I0729 11:31:35.894272  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetMachineName
	I0729 11:31:35.894471  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:35.897214  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.897570  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:35.897592  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:35.897759  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:35.897946  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.898118  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:35.898265  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:35.898409  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:35.898622  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:35.898640  135944 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-691698-m02 && echo "ha-691698-m02" | sudo tee /etc/hostname
	I0729 11:31:36.010748  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-691698-m02
	
	I0729 11:31:36.010780  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.013698  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.014125  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.014152  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.014349  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.014517  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.014666  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.014784  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.014939  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:36.015109  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:36.015125  135944 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-691698-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-691698-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-691698-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:31:36.122113  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
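The SSH script above makes the new hostname resolve locally: it either rewrites an existing 127.0.1.1 entry or appends one, so after it runs, /etc/hosts on the node contains a line equivalent to:

	127.0.1.1 ha-691698-m02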
	I0729 11:31:36.122143  135944 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 11:31:36.122158  135944 buildroot.go:174] setting up certificates
	I0729 11:31:36.122166  135944 provision.go:84] configureAuth start
	I0729 11:31:36.122175  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetMachineName
	I0729 11:31:36.122491  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:31:36.125054  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.125439  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.125478  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.125648  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.128887  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.129341  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.129374  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.129535  135944 provision.go:143] copyHostCerts
	I0729 11:31:36.129583  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:31:36.129629  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 11:31:36.129650  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:31:36.129737  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 11:31:36.129829  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:31:36.129854  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 11:31:36.129865  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:31:36.129902  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 11:31:36.129960  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:31:36.129983  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 11:31:36.129991  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:31:36.130022  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 11:31:36.130087  135944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.ha-691698-m02 san=[127.0.0.1 192.168.39.5 ha-691698-m02 localhost minikube]
	I0729 11:31:36.194045  135944 provision.go:177] copyRemoteCerts
	I0729 11:31:36.194107  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:31:36.194134  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.196817  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.197150  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.197186  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.197398  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.197611  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.197785  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.197925  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	I0729 11:31:36.274662  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 11:31:36.274750  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 11:31:36.299147  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 11:31:36.299218  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:31:36.326189  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 11:31:36.326261  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:31:36.350451  135944 provision.go:87] duration metric: took 228.271408ms to configureAuth
	I0729 11:31:36.350484  135944 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:31:36.350653  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:31:36.350747  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.353558  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.353954  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.353983  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.354146  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.354377  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.354595  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.354759  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.354918  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:36.355102  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:36.355121  135944 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:31:36.606394  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:31:36.606422  135944 main.go:141] libmachine: Checking connection to Docker...
	I0729 11:31:36.606431  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetURL
	I0729 11:31:36.607804  135944 main.go:141] libmachine: (ha-691698-m02) DBG | Using libvirt version 6000000
	I0729 11:31:36.610317  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.610731  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.610759  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.610980  135944 main.go:141] libmachine: Docker is up and running!
	I0729 11:31:36.610997  135944 main.go:141] libmachine: Reticulating splines...
	I0729 11:31:36.611006  135944 client.go:171] duration metric: took 20.832947089s to LocalClient.Create
	I0729 11:31:36.611040  135944 start.go:167] duration metric: took 20.833025153s to libmachine.API.Create "ha-691698"
	I0729 11:31:36.611053  135944 start.go:293] postStartSetup for "ha-691698-m02" (driver="kvm2")
	I0729 11:31:36.611065  135944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:31:36.611083  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:36.611356  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:31:36.611390  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.613595  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.614001  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.614027  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.614134  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.614328  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.614472  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.614605  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	I0729 11:31:36.695117  135944 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:31:36.699498  135944 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:31:36.699535  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 11:31:36.699607  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 11:31:36.699696  135944 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 11:31:36.699709  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /etc/ssl/certs/1209632.pem
	I0729 11:31:36.699810  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:31:36.709245  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:31:36.733214  135944 start.go:296] duration metric: took 122.138653ms for postStartSetup
	I0729 11:31:36.733269  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetConfigRaw
	I0729 11:31:36.733931  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:31:36.736353  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.736792  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.736819  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.737081  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:31:36.737280  135944 start.go:128] duration metric: took 20.978258321s to createHost
	I0729 11:31:36.737310  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.739797  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.740128  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.740153  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.740299  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.740492  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.740678  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.740873  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.741046  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:31:36.741203  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0729 11:31:36.741220  135944 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:31:36.841808  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252696.814837720
	
	I0729 11:31:36.841844  135944 fix.go:216] guest clock: 1722252696.814837720
	I0729 11:31:36.841856  135944 fix.go:229] Guest: 2024-07-29 11:31:36.81483772 +0000 UTC Remote: 2024-07-29 11:31:36.737293619 +0000 UTC m=+77.663462696 (delta=77.544101ms)
	I0729 11:31:36.841882  135944 fix.go:200] guest clock delta is within tolerance: 77.544101ms
	I0729 11:31:36.841892  135944 start.go:83] releasing machines lock for "ha-691698-m02", held for 21.082953845s
	I0729 11:31:36.841922  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:36.842211  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:31:36.844903  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.845368  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.845393  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.847842  135944 out.go:177] * Found network options:
	I0729 11:31:36.849230  135944 out.go:177]   - NO_PROXY=192.168.39.244
	W0729 11:31:36.850468  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 11:31:36.850506  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:36.851204  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:36.851482  135944 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:31:36.851590  135944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:31:36.851637  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	W0729 11:31:36.851728  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 11:31:36.851824  135944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:31:36.851844  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:31:36.854612  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.854714  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.855000  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.855016  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.855131  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:36.855149  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:36.855197  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.855373  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.855377  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:31:36.855528  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:31:36.855542  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.855730  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:31:36.855734  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	I0729 11:31:36.855875  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	I0729 11:31:37.093020  135944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:31:37.099209  135944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:31:37.099274  135944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:31:37.115886  135944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:31:37.115920  135944 start.go:495] detecting cgroup driver to use...
	I0729 11:31:37.115990  135944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:31:37.132295  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:31:37.147287  135944 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:31:37.147351  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:31:37.161781  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:31:37.176933  135944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:31:37.295712  135944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:31:37.452905  135944 docker.go:233] disabling docker service ...
	I0729 11:31:37.452982  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:31:37.469595  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:31:37.483195  135944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:31:37.602172  135944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:31:37.720769  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:31:37.735389  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:31:37.753521  135944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:31:37.753587  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.763991  135944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:31:37.764067  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.774506  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.784887  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.795970  135944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:31:37.807081  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.817852  135944 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.836275  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:31:37.847356  135944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:31:37.857326  135944 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:31:37.857388  135944 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:31:37.870174  135944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:31:37.879634  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:31:37.997156  135944 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:31:38.130010  135944 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:31:38.130111  135944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:31:38.134562  135944 start.go:563] Will wait 60s for crictl version
	I0729 11:31:38.134632  135944 ssh_runner.go:195] Run: which crictl
	I0729 11:31:38.138170  135944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:31:38.174752  135944 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:31:38.174842  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:31:38.203078  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:31:38.232064  135944 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:31:38.233512  135944 out.go:177]   - env NO_PROXY=192.168.39.244
	I0729 11:31:38.234852  135944 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:31:38.237817  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:38.238244  135944 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:31:30 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:31:38.238273  135944 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:31:38.238622  135944 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:31:38.243071  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:31:38.255641  135944 mustload.go:65] Loading cluster: ha-691698
	I0729 11:31:38.255931  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:31:38.256285  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:38.256318  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:38.271253  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37451
	I0729 11:31:38.271745  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:38.272312  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:38.272343  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:38.272709  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:38.272944  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:31:38.274470  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:31:38.274782  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:38.274810  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:38.289920  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34917
	I0729 11:31:38.290400  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:38.290915  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:38.290938  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:38.291288  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:38.291514  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:31:38.291693  135944 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698 for IP: 192.168.39.5
	I0729 11:31:38.291705  135944 certs.go:194] generating shared ca certs ...
	I0729 11:31:38.291720  135944 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:31:38.291842  135944 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 11:31:38.291876  135944 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 11:31:38.291882  135944 certs.go:256] generating profile certs ...
	I0729 11:31:38.291946  135944 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key
	I0729 11:31:38.291973  135944 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.2a0997b0
	I0729 11:31:38.291992  135944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.2a0997b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244 192.168.39.5 192.168.39.254]
	I0729 11:31:38.495951  135944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.2a0997b0 ...
	I0729 11:31:38.495990  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.2a0997b0: {Name:mk6b82ec14c3b68f14a2634e48c65b4e1a7c231d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:31:38.496202  135944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.2a0997b0 ...
	I0729 11:31:38.496221  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.2a0997b0: {Name:mk3f9d4694c2ebbbe9aa6512e9bb831c319706dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:31:38.496320  135944 certs.go:381] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.2a0997b0 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt
	I0729 11:31:38.496508  135944 certs.go:385] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.2a0997b0 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key
	I0729 11:31:38.496685  135944 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key
	I0729 11:31:38.496706  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 11:31:38.496728  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 11:31:38.496745  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 11:31:38.496762  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 11:31:38.496778  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 11:31:38.496792  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 11:31:38.496808  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 11:31:38.496825  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 11:31:38.496888  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 11:31:38.496927  135944 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 11:31:38.496939  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 11:31:38.496997  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:31:38.497030  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:31:38.497060  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 11:31:38.497129  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:31:38.497176  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /usr/share/ca-certificates/1209632.pem
	I0729 11:31:38.497196  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:31:38.497212  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem -> /usr/share/ca-certificates/120963.pem
	I0729 11:31:38.497255  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:31:38.500526  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:38.501085  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:31:38.501117  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:38.501328  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:31:38.501601  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:31:38.501780  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:31:38.501940  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:31:38.573385  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 11:31:38.577821  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 11:31:38.588284  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 11:31:38.592433  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 11:31:38.603257  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 11:31:38.607444  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 11:31:38.618229  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 11:31:38.622201  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 11:31:38.632762  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 11:31:38.636624  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 11:31:38.647369  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 11:31:38.651661  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 11:31:38.663814  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:31:38.689251  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:31:38.713899  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:31:38.739071  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 11:31:38.763569  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 11:31:38.788039  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:31:38.811576  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:31:38.835147  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:31:38.859125  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 11:31:38.883230  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:31:38.907205  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 11:31:38.931538  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 11:31:38.947902  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 11:31:38.963980  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 11:31:38.981027  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 11:31:38.998734  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 11:31:39.016508  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 11:31:39.033781  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 11:31:39.051226  135944 ssh_runner.go:195] Run: openssl version
	I0729 11:31:39.057136  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 11:31:39.068199  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 11:31:39.072744  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 11:31:39.072820  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 11:31:39.078499  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:31:39.088593  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:31:39.098647  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:31:39.102972  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:31:39.103021  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:31:39.108554  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:31:39.118800  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 11:31:39.130875  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 11:31:39.135328  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 11:31:39.135384  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 11:31:39.141022  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
	I0729 11:31:39.151436  135944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:31:39.155155  135944 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 11:31:39.155218  135944 kubeadm.go:934] updating node {m02 192.168.39.5 8443 v1.30.3 crio true true} ...
	I0729 11:31:39.155311  135944 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-691698-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:31:39.155340  135944 kube-vip.go:115] generating kube-vip config ...
	I0729 11:31:39.155384  135944 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 11:31:39.169695  135944 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 11:31:39.169778  135944 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 11:31:39.169850  135944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:31:39.179366  135944 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 11:31:39.179449  135944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 11:31:39.189303  135944 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 11:31:39.189319  135944 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 11:31:39.189352  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 11:31:39.189314  135944 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 11:31:39.189437  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 11:31:39.193697  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 11:31:39.193731  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 11:31:40.054339  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 11:31:40.054418  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 11:31:40.058977  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 11:31:40.059015  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 11:31:41.023647  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:31:41.038178  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 11:31:41.038271  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 11:31:41.042570  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 11:31:41.042608  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 11:31:41.443856  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 11:31:41.454023  135944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 11:31:41.471925  135944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:31:41.489188  135944 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 11:31:41.506561  135944 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 11:31:41.510602  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:31:41.523379  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:31:41.639890  135944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:31:41.657080  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:31:41.657571  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:31:41.657644  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:31:41.673101  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36583
	I0729 11:31:41.673703  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:31:41.674233  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:31:41.674262  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:31:41.674669  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:31:41.674934  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:31:41.675119  135944 start.go:317] joinCluster: &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:31:41.675230  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 11:31:41.675249  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:31:41.678700  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:41.679123  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:31:41.679156  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:31:41.679335  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:31:41.679566  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:31:41.679797  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:31:41.679954  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:31:41.832817  135944 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:31:41.832865  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k6yan8.0jnfim4w1mm9t7gt --discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-691698-m02 --control-plane --apiserver-advertise-address=192.168.39.5 --apiserver-bind-port=8443"
	I0729 11:32:03.840928  135944 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token k6yan8.0jnfim4w1mm9t7gt --discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-691698-m02 --control-plane --apiserver-advertise-address=192.168.39.5 --apiserver-bind-port=8443": (22.008036047s)
	I0729 11:32:03.840980  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 11:32:04.393347  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-691698-m02 minikube.k8s.io/updated_at=2024_07_29T11_32_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53 minikube.k8s.io/name=ha-691698 minikube.k8s.io/primary=false
	I0729 11:32:04.519579  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-691698-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 11:32:04.627302  135944 start.go:319] duration metric: took 22.952179045s to joinCluster
	I0729 11:32:04.627381  135944 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:32:04.627728  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:32:04.629027  135944 out.go:177] * Verifying Kubernetes components...
	I0729 11:32:04.630310  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:32:04.897544  135944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:32:04.960565  135944 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:32:04.960924  135944 kapi.go:59] client config for ha-691698: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt", KeyFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key", CAFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 11:32:04.961035  135944 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.244:8443
	I0729 11:32:04.961309  135944 node_ready.go:35] waiting up to 6m0s for node "ha-691698-m02" to be "Ready" ...
	I0729 11:32:04.961427  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:04.961439  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:04.961451  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:04.961458  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:04.976233  135944 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0729 11:32:05.462214  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:05.462236  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:05.462247  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:05.462252  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:05.469225  135944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 11:32:05.961609  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:05.961637  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:05.961648  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:05.961653  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:05.966006  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:32:06.461773  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:06.461794  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:06.461802  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:06.461806  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:06.464891  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:06.962386  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:06.962410  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:06.962422  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:06.962428  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:06.965942  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:06.966492  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:07.461874  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:07.461905  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:07.461918  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:07.461922  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:07.465647  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:07.962231  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:07.962257  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:07.962266  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:07.962269  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:07.965721  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:08.461617  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:08.461644  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:08.461657  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:08.461661  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:08.465535  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:08.961603  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:08.961624  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:08.961632  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:08.961638  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:08.964880  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:09.461592  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:09.461616  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:09.461625  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:09.461630  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:09.465258  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:09.465956  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:09.962247  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:09.962270  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:09.962280  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:09.962286  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:09.965603  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:10.461758  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:10.461785  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:10.461797  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:10.461800  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:10.465371  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:10.962146  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:10.962169  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:10.962195  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:10.962200  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:10.965463  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:11.462343  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:11.462370  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:11.462380  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:11.462383  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:11.465533  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:11.466103  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:11.962024  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:11.962048  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:11.962063  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:11.962067  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:12.003235  135944 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0729 11:32:12.462430  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:12.462453  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:12.462464  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:12.462472  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:12.465495  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:12.962237  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:12.962263  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:12.962275  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:12.962283  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:12.965500  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:13.461971  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:13.461999  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:13.462010  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:13.462017  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:13.465564  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:13.466248  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:13.961600  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:13.961623  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:13.961632  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:13.961636  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:13.964983  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:14.462191  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:14.462217  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:14.462227  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:14.462232  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:14.465492  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:14.961997  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:14.962020  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:14.962028  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:14.962033  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:14.965348  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:15.462574  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:15.462600  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:15.462612  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:15.462617  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:15.465664  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:15.962062  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:15.962089  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:15.962100  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:15.962105  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:15.965647  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:15.966182  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:16.461554  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:16.461584  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:16.461597  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:16.461602  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:16.465049  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:16.962395  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:16.962420  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:16.962427  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:16.962437  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:16.966012  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:17.461621  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:17.461652  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:17.461664  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:17.461669  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:17.464702  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:17.961593  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:17.961619  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:17.961630  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:17.961636  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:17.964979  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:18.461865  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:18.461890  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:18.461899  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:18.461902  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:18.465345  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:18.465882  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:18.962343  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:18.962369  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:18.962380  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:18.962385  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:18.965722  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:19.462421  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:19.462442  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:19.462451  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:19.462457  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:19.467885  135944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 11:32:19.961966  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:19.961994  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:19.962008  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:19.962013  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:19.965705  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:20.462240  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:20.462262  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:20.462270  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:20.462274  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:20.465514  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:20.466129  135944 node_ready.go:53] node "ha-691698-m02" has status "Ready":"False"
	I0729 11:32:20.961822  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:20.961845  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:20.961853  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:20.961858  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:20.965865  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:21.461796  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:21.461822  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:21.461831  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:21.461834  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:21.465816  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:21.962363  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:21.962387  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:21.962395  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:21.962400  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:21.966068  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:22.462024  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:22.462045  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.462054  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.462058  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.465535  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:22.466076  135944 node_ready.go:49] node "ha-691698-m02" has status "Ready":"True"
	I0729 11:32:22.466096  135944 node_ready.go:38] duration metric: took 17.504767524s for node "ha-691698-m02" to be "Ready" ...
	I0729 11:32:22.466105  135944 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:32:22.466185  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:32:22.466191  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.466198  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.466203  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.471047  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:32:22.477901  135944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.477997  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p7zbj
	I0729 11:32:22.478008  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.478016  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.478020  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.482119  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:32:22.482944  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:22.482963  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.482973  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.482977  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.485028  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:32:22.485579  135944 pod_ready.go:92] pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:22.485602  135944 pod_ready.go:81] duration metric: took 7.674871ms for pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.485616  135944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.485675  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-r48d8
	I0729 11:32:22.485682  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.485690  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.485695  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.487932  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:32:22.488545  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:22.488561  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.488569  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.488574  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.490563  135944 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 11:32:22.491019  135944 pod_ready.go:92] pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:22.491035  135944 pod_ready.go:81] duration metric: took 5.409217ms for pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.491044  135944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.491090  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698
	I0729 11:32:22.491097  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.491105  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.491112  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.493261  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:32:22.493860  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:22.493874  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.493881  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.493884  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.495778  135944 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 11:32:22.496373  135944 pod_ready.go:92] pod "etcd-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:22.496390  135944 pod_ready.go:81] duration metric: took 5.340632ms for pod "etcd-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.496398  135944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.496438  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698-m02
	I0729 11:32:22.496446  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.496452  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.496456  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.498553  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:32:22.499056  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:22.499070  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.499076  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.499079  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.500984  135944 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 11:32:22.501423  135944 pod_ready.go:92] pod "etcd-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:22.501440  135944 pod_ready.go:81] duration metric: took 5.035545ms for pod "etcd-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.501459  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.662878  135944 request.go:629] Waited for 161.330614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698
	I0729 11:32:22.662969  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698
	I0729 11:32:22.662991  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.663010  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.663019  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.666154  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:22.862117  135944 request.go:629] Waited for 195.32766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:22.862184  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:22.862190  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:22.862198  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:22.862202  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:22.865231  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:22.865720  135944 pod_ready.go:92] pod "kube-apiserver-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:22.865739  135944 pod_ready.go:81] duration metric: took 364.269821ms for pod "kube-apiserver-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:22.865751  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:23.062874  135944 request.go:629] Waited for 197.020122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m02
	I0729 11:32:23.062949  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m02
	I0729 11:32:23.062955  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:23.062962  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:23.062967  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:23.066551  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:23.262594  135944 request.go:629] Waited for 195.28852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:23.262667  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:23.262672  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:23.262682  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:23.262692  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:23.266285  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:23.266821  135944 pod_ready.go:92] pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:23.266840  135944 pod_ready.go:81] duration metric: took 401.080433ms for pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:23.266850  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:23.463079  135944 request.go:629] Waited for 196.158228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698
	I0729 11:32:23.463139  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698
	I0729 11:32:23.463144  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:23.463151  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:23.463156  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:23.466869  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:23.662934  135944 request.go:629] Waited for 195.378415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:23.663000  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:23.663007  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:23.663020  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:23.663028  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:23.666276  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:23.666796  135944 pod_ready.go:92] pod "kube-controller-manager-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:23.666814  135944 pod_ready.go:81] duration metric: took 399.956322ms for pod "kube-controller-manager-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:23.666831  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:23.862888  135944 request.go:629] Waited for 195.986941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m02
	I0729 11:32:23.862976  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m02
	I0729 11:32:23.862986  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:23.862999  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:23.863008  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:23.866406  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.062469  135944 request.go:629] Waited for 195.391025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:24.062557  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:24.062566  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:24.062575  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:24.062580  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:24.065813  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.066574  135944 pod_ready.go:92] pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:24.066592  135944 pod_ready.go:81] duration metric: took 399.755147ms for pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:24.066605  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5hn2s" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:24.262378  135944 request.go:629] Waited for 195.696313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hn2s
	I0729 11:32:24.262454  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hn2s
	I0729 11:32:24.262462  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:24.262473  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:24.262477  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:24.265878  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.462991  135944 request.go:629] Waited for 196.437934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:24.463048  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:24.463057  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:24.463066  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:24.463069  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:24.466620  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.467378  135944 pod_ready.go:92] pod "kube-proxy-5hn2s" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:24.467397  135944 pod_ready.go:81] duration metric: took 400.785631ms for pod "kube-proxy-5hn2s" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:24.467407  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8p4nc" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:24.662603  135944 request.go:629] Waited for 195.10343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8p4nc
	I0729 11:32:24.662664  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8p4nc
	I0729 11:32:24.662672  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:24.662679  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:24.662683  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:24.666538  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.862352  135944 request.go:629] Waited for 195.153062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:24.862426  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:24.862431  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:24.862439  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:24.862444  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:24.865910  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:24.866463  135944 pod_ready.go:92] pod "kube-proxy-8p4nc" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:24.866485  135944 pod_ready.go:81] duration metric: took 399.072237ms for pod "kube-proxy-8p4nc" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:24.866496  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:25.062674  135944 request.go:629] Waited for 196.089202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698
	I0729 11:32:25.062751  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698
	I0729 11:32:25.062761  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.062771  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.062777  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.066281  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:25.262198  135944 request.go:629] Waited for 195.303482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:25.262275  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:32:25.262280  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.262288  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.262292  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.265726  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:25.266373  135944 pod_ready.go:92] pod "kube-scheduler-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:25.266403  135944 pod_ready.go:81] duration metric: took 399.899992ms for pod "kube-scheduler-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:25.266415  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:25.462558  135944 request.go:629] Waited for 196.062831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m02
	I0729 11:32:25.462630  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m02
	I0729 11:32:25.462647  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.462656  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.462662  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.465926  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:25.662896  135944 request.go:629] Waited for 196.397272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:25.662979  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:32:25.662986  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.662996  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.663008  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.666405  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:25.667062  135944 pod_ready.go:92] pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:32:25.667079  135944 pod_ready.go:81] duration metric: took 400.657123ms for pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:32:25.667092  135944 pod_ready.go:38] duration metric: took 3.200958973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:32:25.667109  135944 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:32:25.667167  135944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:32:25.681427  135944 api_server.go:72] duration metric: took 21.05399667s to wait for apiserver process to appear ...
	I0729 11:32:25.681461  135944 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:32:25.681488  135944 api_server.go:253] Checking apiserver healthz at https://192.168.39.244:8443/healthz ...
	I0729 11:32:25.687357  135944 api_server.go:279] https://192.168.39.244:8443/healthz returned 200:
	ok
	I0729 11:32:25.687449  135944 round_trippers.go:463] GET https://192.168.39.244:8443/version
	I0729 11:32:25.687460  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.687470  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.687477  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.688358  135944 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 11:32:25.688469  135944 api_server.go:141] control plane version: v1.30.3
	I0729 11:32:25.688494  135944 api_server.go:131] duration metric: took 7.02376ms to wait for apiserver health ...
	I0729 11:32:25.688507  135944 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:32:25.862095  135944 request.go:629] Waited for 173.482481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:32:25.862163  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:32:25.862169  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:25.862177  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:25.862184  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:25.867405  135944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 11:32:25.872378  135944 system_pods.go:59] 17 kube-system pods found
	I0729 11:32:25.872417  135944 system_pods.go:61] "coredns-7db6d8ff4d-p7zbj" [7b85aaa0-2ae6-4883-b4e1-8e8af1eea933] Running
	I0729 11:32:25.872422  135944 system_pods.go:61] "coredns-7db6d8ff4d-r48d8" [4d0329d8-26c1-49e5-8af9-8ecda56993ca] Running
	I0729 11:32:25.872426  135944 system_pods.go:61] "etcd-ha-691698" [0ee49cc2-19a3-4c80-bd79-460cc88206ee] Running
	I0729 11:32:25.872430  135944 system_pods.go:61] "etcd-ha-691698-m02" [1b8d5662-c834-47b7-a129-820e1f0a7883] Running
	I0729 11:32:25.872433  135944 system_pods.go:61] "kindnet-gl972" [caf4ea26-7d7a-419f-9493-67639c78ed1d] Running
	I0729 11:32:25.872437  135944 system_pods.go:61] "kindnet-wrx27" [6623ec79-af43-4486-bd89-65e8692e920c] Running
	I0729 11:32:25.872440  135944 system_pods.go:61] "kube-apiserver-ha-691698" [ad0e6226-1f3a-4d3f-a81d-c572dc307e90] Running
	I0729 11:32:25.872443  135944 system_pods.go:61] "kube-apiserver-ha-691698-m02" [03c7a68e-a0df-4d22-a96d-c08d4a6099dd] Running
	I0729 11:32:25.872446  135944 system_pods.go:61] "kube-controller-manager-ha-691698" [33507788-a0ea-4f85-98b8-670617e63b2e] Running
	I0729 11:32:25.872451  135944 system_pods.go:61] "kube-controller-manager-ha-691698-m02" [be300341-bb85-4c72-b66a-f1a5c280e48c] Running
	I0729 11:32:25.872454  135944 system_pods.go:61] "kube-proxy-5hn2s" [b73c788f-9f8d-421e-b967-89b9154ea946] Running
	I0729 11:32:25.872457  135944 system_pods.go:61] "kube-proxy-8p4nc" [c20bd4bc-8fca-437d-854e-b01b594f32f4] Running
	I0729 11:32:25.872460  135944 system_pods.go:61] "kube-scheduler-ha-691698" [c6a21e51-28c0-41d2-b1a1-30bb1ad4e979] Running
	I0729 11:32:25.872463  135944 system_pods.go:61] "kube-scheduler-ha-691698-m02" [65d29208-4055-4da5-b612-454ef28c5c0e] Running
	I0729 11:32:25.872465  135944 system_pods.go:61] "kube-vip-ha-691698" [1b5b8d68-2923-4dc5-bcf1-492593eb2d51] Running
	I0729 11:32:25.872468  135944 system_pods.go:61] "kube-vip-ha-691698-m02" [8a2d8ba0-dc4e-4831-b9f2-31c18b9edc91] Running
	I0729 11:32:25.872472  135944 system_pods.go:61] "storage-provisioner" [694c60e1-9d4e-4fea-96e6-21554bbf1aaa] Running
	I0729 11:32:25.872478  135944 system_pods.go:74] duration metric: took 183.963171ms to wait for pod list to return data ...
	I0729 11:32:25.872490  135944 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:32:26.062955  135944 request.go:629] Waited for 190.370313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/default/serviceaccounts
	I0729 11:32:26.063013  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/default/serviceaccounts
	I0729 11:32:26.063018  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:26.063026  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:26.063031  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:26.066675  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:26.066968  135944 default_sa.go:45] found service account: "default"
	I0729 11:32:26.066988  135944 default_sa.go:55] duration metric: took 194.485878ms for default service account to be created ...
	I0729 11:32:26.067001  135944 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:32:26.262473  135944 request.go:629] Waited for 195.391661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:32:26.262555  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:32:26.262561  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:26.262572  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:26.262578  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:26.267707  135944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 11:32:26.273665  135944 system_pods.go:86] 17 kube-system pods found
	I0729 11:32:26.273698  135944 system_pods.go:89] "coredns-7db6d8ff4d-p7zbj" [7b85aaa0-2ae6-4883-b4e1-8e8af1eea933] Running
	I0729 11:32:26.273706  135944 system_pods.go:89] "coredns-7db6d8ff4d-r48d8" [4d0329d8-26c1-49e5-8af9-8ecda56993ca] Running
	I0729 11:32:26.273711  135944 system_pods.go:89] "etcd-ha-691698" [0ee49cc2-19a3-4c80-bd79-460cc88206ee] Running
	I0729 11:32:26.273716  135944 system_pods.go:89] "etcd-ha-691698-m02" [1b8d5662-c834-47b7-a129-820e1f0a7883] Running
	I0729 11:32:26.273721  135944 system_pods.go:89] "kindnet-gl972" [caf4ea26-7d7a-419f-9493-67639c78ed1d] Running
	I0729 11:32:26.273724  135944 system_pods.go:89] "kindnet-wrx27" [6623ec79-af43-4486-bd89-65e8692e920c] Running
	I0729 11:32:26.273728  135944 system_pods.go:89] "kube-apiserver-ha-691698" [ad0e6226-1f3a-4d3f-a81d-c572dc307e90] Running
	I0729 11:32:26.273733  135944 system_pods.go:89] "kube-apiserver-ha-691698-m02" [03c7a68e-a0df-4d22-a96d-c08d4a6099dd] Running
	I0729 11:32:26.273738  135944 system_pods.go:89] "kube-controller-manager-ha-691698" [33507788-a0ea-4f85-98b8-670617e63b2e] Running
	I0729 11:32:26.273742  135944 system_pods.go:89] "kube-controller-manager-ha-691698-m02" [be300341-bb85-4c72-b66a-f1a5c280e48c] Running
	I0729 11:32:26.273748  135944 system_pods.go:89] "kube-proxy-5hn2s" [b73c788f-9f8d-421e-b967-89b9154ea946] Running
	I0729 11:32:26.273753  135944 system_pods.go:89] "kube-proxy-8p4nc" [c20bd4bc-8fca-437d-854e-b01b594f32f4] Running
	I0729 11:32:26.273759  135944 system_pods.go:89] "kube-scheduler-ha-691698" [c6a21e51-28c0-41d2-b1a1-30bb1ad4e979] Running
	I0729 11:32:26.273765  135944 system_pods.go:89] "kube-scheduler-ha-691698-m02" [65d29208-4055-4da5-b612-454ef28c5c0e] Running
	I0729 11:32:26.273780  135944 system_pods.go:89] "kube-vip-ha-691698" [1b5b8d68-2923-4dc5-bcf1-492593eb2d51] Running
	I0729 11:32:26.273786  135944 system_pods.go:89] "kube-vip-ha-691698-m02" [8a2d8ba0-dc4e-4831-b9f2-31c18b9edc91] Running
	I0729 11:32:26.273791  135944 system_pods.go:89] "storage-provisioner" [694c60e1-9d4e-4fea-96e6-21554bbf1aaa] Running
	I0729 11:32:26.273799  135944 system_pods.go:126] duration metric: took 206.788322ms to wait for k8s-apps to be running ...
	I0729 11:32:26.273815  135944 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:32:26.273867  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:32:26.288740  135944 system_svc.go:56] duration metric: took 14.918303ms WaitForService to wait for kubelet
	I0729 11:32:26.288785  135944 kubeadm.go:582] duration metric: took 21.661367602s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:32:26.288811  135944 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:32:26.462221  135944 request.go:629] Waited for 173.316729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes
	I0729 11:32:26.462290  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes
	I0729 11:32:26.462297  135944 round_trippers.go:469] Request Headers:
	I0729 11:32:26.462307  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:32:26.462313  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:32:26.466058  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:32:26.466971  135944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:32:26.466996  135944 node_conditions.go:123] node cpu capacity is 2
	I0729 11:32:26.467007  135944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:32:26.467011  135944 node_conditions.go:123] node cpu capacity is 2
	I0729 11:32:26.467015  135944 node_conditions.go:105] duration metric: took 178.198814ms to run NodePressure ...
	I0729 11:32:26.467027  135944 start.go:241] waiting for startup goroutines ...
	I0729 11:32:26.467050  135944 start.go:255] writing updated cluster config ...
	I0729 11:32:26.469018  135944 out.go:177] 
	I0729 11:32:26.470517  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:32:26.470619  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:32:26.472356  135944 out.go:177] * Starting "ha-691698-m03" control-plane node in "ha-691698" cluster
	I0729 11:32:26.473685  135944 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:32:26.473717  135944 cache.go:56] Caching tarball of preloaded images
	I0729 11:32:26.473840  135944 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:32:26.473852  135944 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:32:26.473946  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:32:26.474117  135944 start.go:360] acquireMachinesLock for ha-691698-m03: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:32:26.474157  135944 start.go:364] duration metric: took 21.796µs to acquireMachinesLock for "ha-691698-m03"
	I0729 11:32:26.474179  135944 start.go:93] Provisioning new machine with config: &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:32:26.474268  135944 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 11:32:26.475957  135944 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 11:32:26.476052  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:32:26.476087  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:32:26.491106  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I0729 11:32:26.491597  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:32:26.492100  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:32:26.492120  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:32:26.492456  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:32:26.492681  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetMachineName
	I0729 11:32:26.492858  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:26.493059  135944 start.go:159] libmachine.API.Create for "ha-691698" (driver="kvm2")
	I0729 11:32:26.493093  135944 client.go:168] LocalClient.Create starting
	I0729 11:32:26.493144  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem
	I0729 11:32:26.493178  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:32:26.493193  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:32:26.493240  135944 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem
	I0729 11:32:26.493257  135944 main.go:141] libmachine: Decoding PEM data...
	I0729 11:32:26.493267  135944 main.go:141] libmachine: Parsing certificate...
	I0729 11:32:26.493282  135944 main.go:141] libmachine: Running pre-create checks...
	I0729 11:32:26.493290  135944 main.go:141] libmachine: (ha-691698-m03) Calling .PreCreateCheck
	I0729 11:32:26.493474  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetConfigRaw
	I0729 11:32:26.493862  135944 main.go:141] libmachine: Creating machine...
	I0729 11:32:26.493874  135944 main.go:141] libmachine: (ha-691698-m03) Calling .Create
	I0729 11:32:26.494029  135944 main.go:141] libmachine: (ha-691698-m03) Creating KVM machine...
	I0729 11:32:26.495358  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found existing default KVM network
	I0729 11:32:26.495463  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found existing private KVM network mk-ha-691698
	I0729 11:32:26.495589  135944 main.go:141] libmachine: (ha-691698-m03) Setting up store path in /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03 ...
	I0729 11:32:26.495614  135944 main.go:141] libmachine: (ha-691698-m03) Building disk image from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 11:32:26.495664  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:26.495569  136723 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:32:26.495788  135944 main.go:141] libmachine: (ha-691698-m03) Downloading /home/jenkins/minikube-integration/19336-113730/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 11:32:26.735242  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:26.735091  136723 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa...
	I0729 11:32:27.006279  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:27.006119  136723 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/ha-691698-m03.rawdisk...
	I0729 11:32:27.006308  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Writing magic tar header
	I0729 11:32:27.006324  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Writing SSH key tar header
	I0729 11:32:27.006338  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:27.006234  136723 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03 ...
	I0729 11:32:27.006353  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03
	I0729 11:32:27.006379  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03 (perms=drwx------)
	I0729 11:32:27.006391  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines
	I0729 11:32:27.006405  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:32:27.006414  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730
	I0729 11:32:27.006424  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 11:32:27.006434  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 11:32:27.006445  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines (perms=drwxr-xr-x)
	I0729 11:32:27.006463  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Checking permissions on dir: /home
	I0729 11:32:27.006480  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Skipping /home - not owner
	I0729 11:32:27.006491  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube (perms=drwxr-xr-x)
	I0729 11:32:27.006499  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730 (perms=drwxrwxr-x)
	I0729 11:32:27.006504  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 11:32:27.006513  135944 main.go:141] libmachine: (ha-691698-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 11:32:27.006520  135944 main.go:141] libmachine: (ha-691698-m03) Creating domain...
	I0729 11:32:27.007380  135944 main.go:141] libmachine: (ha-691698-m03) define libvirt domain using xml: 
	I0729 11:32:27.007410  135944 main.go:141] libmachine: (ha-691698-m03) <domain type='kvm'>
	I0729 11:32:27.007421  135944 main.go:141] libmachine: (ha-691698-m03)   <name>ha-691698-m03</name>
	I0729 11:32:27.007433  135944 main.go:141] libmachine: (ha-691698-m03)   <memory unit='MiB'>2200</memory>
	I0729 11:32:27.007442  135944 main.go:141] libmachine: (ha-691698-m03)   <vcpu>2</vcpu>
	I0729 11:32:27.007447  135944 main.go:141] libmachine: (ha-691698-m03)   <features>
	I0729 11:32:27.007455  135944 main.go:141] libmachine: (ha-691698-m03)     <acpi/>
	I0729 11:32:27.007460  135944 main.go:141] libmachine: (ha-691698-m03)     <apic/>
	I0729 11:32:27.007467  135944 main.go:141] libmachine: (ha-691698-m03)     <pae/>
	I0729 11:32:27.007471  135944 main.go:141] libmachine: (ha-691698-m03)     
	I0729 11:32:27.007478  135944 main.go:141] libmachine: (ha-691698-m03)   </features>
	I0729 11:32:27.007484  135944 main.go:141] libmachine: (ha-691698-m03)   <cpu mode='host-passthrough'>
	I0729 11:32:27.007489  135944 main.go:141] libmachine: (ha-691698-m03)   
	I0729 11:32:27.007494  135944 main.go:141] libmachine: (ha-691698-m03)   </cpu>
	I0729 11:32:27.007499  135944 main.go:141] libmachine: (ha-691698-m03)   <os>
	I0729 11:32:27.007506  135944 main.go:141] libmachine: (ha-691698-m03)     <type>hvm</type>
	I0729 11:32:27.007538  135944 main.go:141] libmachine: (ha-691698-m03)     <boot dev='cdrom'/>
	I0729 11:32:27.007562  135944 main.go:141] libmachine: (ha-691698-m03)     <boot dev='hd'/>
	I0729 11:32:27.007575  135944 main.go:141] libmachine: (ha-691698-m03)     <bootmenu enable='no'/>
	I0729 11:32:27.007583  135944 main.go:141] libmachine: (ha-691698-m03)   </os>
	I0729 11:32:27.007593  135944 main.go:141] libmachine: (ha-691698-m03)   <devices>
	I0729 11:32:27.007606  135944 main.go:141] libmachine: (ha-691698-m03)     <disk type='file' device='cdrom'>
	I0729 11:32:27.007623  135944 main.go:141] libmachine: (ha-691698-m03)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/boot2docker.iso'/>
	I0729 11:32:27.007639  135944 main.go:141] libmachine: (ha-691698-m03)       <target dev='hdc' bus='scsi'/>
	I0729 11:32:27.007651  135944 main.go:141] libmachine: (ha-691698-m03)       <readonly/>
	I0729 11:32:27.007662  135944 main.go:141] libmachine: (ha-691698-m03)     </disk>
	I0729 11:32:27.007674  135944 main.go:141] libmachine: (ha-691698-m03)     <disk type='file' device='disk'>
	I0729 11:32:27.007687  135944 main.go:141] libmachine: (ha-691698-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 11:32:27.007704  135944 main.go:141] libmachine: (ha-691698-m03)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/ha-691698-m03.rawdisk'/>
	I0729 11:32:27.007720  135944 main.go:141] libmachine: (ha-691698-m03)       <target dev='hda' bus='virtio'/>
	I0729 11:32:27.007732  135944 main.go:141] libmachine: (ha-691698-m03)     </disk>
	I0729 11:32:27.007743  135944 main.go:141] libmachine: (ha-691698-m03)     <interface type='network'>
	I0729 11:32:27.007753  135944 main.go:141] libmachine: (ha-691698-m03)       <source network='mk-ha-691698'/>
	I0729 11:32:27.007768  135944 main.go:141] libmachine: (ha-691698-m03)       <model type='virtio'/>
	I0729 11:32:27.007781  135944 main.go:141] libmachine: (ha-691698-m03)     </interface>
	I0729 11:32:27.007796  135944 main.go:141] libmachine: (ha-691698-m03)     <interface type='network'>
	I0729 11:32:27.007810  135944 main.go:141] libmachine: (ha-691698-m03)       <source network='default'/>
	I0729 11:32:27.007823  135944 main.go:141] libmachine: (ha-691698-m03)       <model type='virtio'/>
	I0729 11:32:27.007839  135944 main.go:141] libmachine: (ha-691698-m03)     </interface>
	I0729 11:32:27.007849  135944 main.go:141] libmachine: (ha-691698-m03)     <serial type='pty'>
	I0729 11:32:27.007859  135944 main.go:141] libmachine: (ha-691698-m03)       <target port='0'/>
	I0729 11:32:27.007873  135944 main.go:141] libmachine: (ha-691698-m03)     </serial>
	I0729 11:32:27.007885  135944 main.go:141] libmachine: (ha-691698-m03)     <console type='pty'>
	I0729 11:32:27.007897  135944 main.go:141] libmachine: (ha-691698-m03)       <target type='serial' port='0'/>
	I0729 11:32:27.007909  135944 main.go:141] libmachine: (ha-691698-m03)     </console>
	I0729 11:32:27.007919  135944 main.go:141] libmachine: (ha-691698-m03)     <rng model='virtio'>
	I0729 11:32:27.007932  135944 main.go:141] libmachine: (ha-691698-m03)       <backend model='random'>/dev/random</backend>
	I0729 11:32:27.007946  135944 main.go:141] libmachine: (ha-691698-m03)     </rng>
	I0729 11:32:27.007957  135944 main.go:141] libmachine: (ha-691698-m03)     
	I0729 11:32:27.007967  135944 main.go:141] libmachine: (ha-691698-m03)     
	I0729 11:32:27.007979  135944 main.go:141] libmachine: (ha-691698-m03)   </devices>
	I0729 11:32:27.007987  135944 main.go:141] libmachine: (ha-691698-m03) </domain>
	I0729 11:32:27.007999  135944 main.go:141] libmachine: (ha-691698-m03) 
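	The block above is the generated libvirt domain XML for the new guest. A minimal sketch of the define-and-boot step, assuming the classic libvirt Go bindings (import path and helper name are illustrative, not the driver's actual code); "Creating domain..." in the log corresponds to starting the defined domain:

	```go
	package main

	import (
		"log"

		libvirt "github.com/libvirt/libvirt-go" // assumption: classic libvirt Go bindings
	)

	// defineAndStart defines a persistent domain from XML like the document above and boots it.
	func defineAndStart(uri, domainXML string) error {
		conn, err := libvirt.NewConnect(uri) // e.g. "qemu:///system", as in KVMQemuURI in the config dump
		if err != nil {
			return err
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()

		return dom.Create() // boot the guest; libvirt assigns a MAC per <interface> as logged below
	}

	func main() {
		// The real XML is the full <domain type='kvm'>...</domain> document logged above;
		// the placeholder here would be rejected by libvirt at runtime.
		if err := defineAndStart("qemu:///system", "<domain type='kvm'>...</domain>"); err != nil {
			log.Fatal(err)
		}
	}
	```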
	I0729 11:32:27.014811  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:a6:7f:ab in network default
	I0729 11:32:27.015438  135944 main.go:141] libmachine: (ha-691698-m03) Ensuring networks are active...
	I0729 11:32:27.015464  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:27.016179  135944 main.go:141] libmachine: (ha-691698-m03) Ensuring network default is active
	I0729 11:32:27.016502  135944 main.go:141] libmachine: (ha-691698-m03) Ensuring network mk-ha-691698 is active
	I0729 11:32:27.016836  135944 main.go:141] libmachine: (ha-691698-m03) Getting domain xml...
	I0729 11:32:27.017577  135944 main.go:141] libmachine: (ha-691698-m03) Creating domain...
	I0729 11:32:28.250903  135944 main.go:141] libmachine: (ha-691698-m03) Waiting to get IP...
	I0729 11:32:28.251767  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:28.252191  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:28.252218  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:28.252145  136723 retry.go:31] will retry after 253.703332ms: waiting for machine to come up
	I0729 11:32:28.507702  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:28.508150  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:28.508180  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:28.508098  136723 retry.go:31] will retry after 267.484872ms: waiting for machine to come up
	I0729 11:32:28.777566  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:28.777996  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:28.778023  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:28.777948  136723 retry.go:31] will retry after 341.397216ms: waiting for machine to come up
	I0729 11:32:29.120621  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:29.121220  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:29.121246  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:29.121176  136723 retry.go:31] will retry after 608.777311ms: waiting for machine to come up
	I0729 11:32:29.731560  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:29.732037  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:29.732101  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:29.731998  136723 retry.go:31] will retry after 693.26674ms: waiting for machine to come up
	I0729 11:32:30.426477  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:30.426858  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:30.426886  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:30.426825  136723 retry.go:31] will retry after 791.149999ms: waiting for machine to come up
	I0729 11:32:31.219306  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:31.219746  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:31.219778  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:31.219702  136723 retry.go:31] will retry after 904.929817ms: waiting for machine to come up
	I0729 11:32:32.126018  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:32.126502  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:32.126541  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:32.126449  136723 retry.go:31] will retry after 1.220150284s: waiting for machine to come up
	I0729 11:32:33.348801  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:33.349346  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:33.349373  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:33.349288  136723 retry.go:31] will retry after 1.438498563s: waiting for machine to come up
	I0729 11:32:34.789836  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:34.790306  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:34.790335  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:34.790267  136723 retry.go:31] will retry after 1.804831632s: waiting for machine to come up
	I0729 11:32:36.596807  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:36.597242  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:36.597271  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:36.597191  136723 retry.go:31] will retry after 2.583018327s: waiting for machine to come up
	I0729 11:32:39.182967  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:39.183479  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:39.183505  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:39.183431  136723 retry.go:31] will retry after 2.35917847s: waiting for machine to come up
	I0729 11:32:41.543809  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:41.544193  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:41.544216  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:41.544148  136723 retry.go:31] will retry after 3.772141656s: waiting for machine to come up
	I0729 11:32:45.321108  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:45.321512  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find current IP address of domain ha-691698-m03 in network mk-ha-691698
	I0729 11:32:45.321536  135944 main.go:141] libmachine: (ha-691698-m03) DBG | I0729 11:32:45.321469  136723 retry.go:31] will retry after 4.123061195s: waiting for machine to come up
	I0729 11:32:49.447711  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.448117  135944 main.go:141] libmachine: (ha-691698-m03) Found IP for machine: 192.168.39.23
	I0729 11:32:49.448137  135944 main.go:141] libmachine: (ha-691698-m03) Reserving static IP address...
	I0729 11:32:49.448147  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has current primary IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.448538  135944 main.go:141] libmachine: (ha-691698-m03) DBG | unable to find host DHCP lease matching {name: "ha-691698-m03", mac: "52:54:00:67:96:46", ip: "192.168.39.23"} in network mk-ha-691698
	I0729 11:32:49.522499  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Getting to WaitForSSH function...
	I0729 11:32:49.522531  135944 main.go:141] libmachine: (ha-691698-m03) Reserved static IP address: 192.168.39.23
	I0729 11:32:49.522552  135944 main.go:141] libmachine: (ha-691698-m03) Waiting for SSH to be available...
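	Between domain creation and the "Found IP for machine" line, the driver repeatedly looks for a DHCP lease and backs off a little longer each attempt (the retry.go lines above, ~250ms growing to a few seconds). A minimal sketch of that wait loop; lookup is a stand-in for the real libvirt lease query and the timeout is illustrative:

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls until the guest has an address, sleeping a little longer
	// between attempts, roughly matching the growth of the retry delays logged above.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(delay)
			if delay < 4*time.Second {
				delay = delay * 3 / 2
			}
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 3 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.23", nil // the address eventually found in the log
		}, 30*time.Second)
		fmt.Println(ip, err)
	}
	```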
	I0729 11:32:49.524782  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.525242  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:minikube Clientid:01:52:54:00:67:96:46}
	I0729 11:32:49.525274  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.525424  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Using SSH client type: external
	I0729 11:32:49.525456  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa (-rw-------)
	I0729 11:32:49.525485  135944 main.go:141] libmachine: (ha-691698-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 11:32:49.525499  135944 main.go:141] libmachine: (ha-691698-m03) DBG | About to run SSH command:
	I0729 11:32:49.525623  135944 main.go:141] libmachine: (ha-691698-m03) DBG | exit 0
	I0729 11:32:49.652774  135944 main.go:141] libmachine: (ha-691698-m03) DBG | SSH cmd err, output: <nil>: 
	I0729 11:32:49.653029  135944 main.go:141] libmachine: (ha-691698-m03) KVM machine creation complete!
	I0729 11:32:49.653317  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetConfigRaw
	I0729 11:32:49.653904  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:49.654108  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:49.654298  135944 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 11:32:49.654313  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:32:49.655656  135944 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 11:32:49.655671  135944 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 11:32:49.655677  135944 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 11:32:49.655683  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:49.658019  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.658500  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:49.658530  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.658781  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:49.658981  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.659145  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.659296  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:49.659588  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:49.659819  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:49.659831  135944 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 11:32:49.764235  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
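	The log first probes SSH with an external ssh invocation and then with a native Go client (the `&{{{<nil> ...}` line is that client's dumped configuration). A minimal sketch of such an `exit 0` probe using golang.org/x/crypto/ssh, with the user, key path, and address taken from the log; this is an illustration of the probe, not the provisioner's exact code:

	```go
	package main

	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPath := "/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa"
		addr := "192.168.39.23:22"

		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}

		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}

		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// The same trivial command the log runs to confirm the guest is reachable.
		if err := sess.Run("exit 0"); err != nil {
			log.Fatal(err)
		}
		log.Println("SSH is available")
	}
	```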
	I0729 11:32:49.764267  135944 main.go:141] libmachine: Detecting the provisioner...
	I0729 11:32:49.764276  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:49.766999  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.767350  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:49.767373  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.767587  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:49.767767  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.767950  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.768118  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:49.768286  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:49.768443  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:49.768453  135944 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 11:32:49.873362  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 11:32:49.873422  135944 main.go:141] libmachine: found compatible host: buildroot
	I0729 11:32:49.873429  135944 main.go:141] libmachine: Provisioning with buildroot...
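	The provisioner identifies the guest OS by reading /etc/os-release (output above) and matching ID=buildroot. A small sketch of that detection, assuming the simple KEY=VALUE format shown; the parsing helper is illustrative:

	```go
	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns KEY=VALUE lines from /etc/os-release into a map,
	// stripping optional quotes (e.g. PRETTY_NAME="Buildroot 2023.02.9").
	func parseOSRelease(contents string) map[string]string {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(contents))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			kv := strings.SplitN(line, "=", 2)
			fields[kv[0]] = strings.Trim(kv[1], `"`)
		}
		return fields
	}

	func main() {
		// The exact output captured in the log above.
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		if parseOSRelease(out)["ID"] == "buildroot" {
			fmt.Println("found compatible host: buildroot")
		}
	}
	```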
	I0729 11:32:49.873438  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetMachineName
	I0729 11:32:49.873732  135944 buildroot.go:166] provisioning hostname "ha-691698-m03"
	I0729 11:32:49.873756  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetMachineName
	I0729 11:32:49.873956  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:49.876382  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.876755  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:49.876781  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:49.876951  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:49.877121  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.877260  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:49.877413  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:49.877597  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:49.877762  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:49.877774  135944 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-691698-m03 && echo "ha-691698-m03" | sudo tee /etc/hostname
	I0729 11:32:49.998655  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-691698-m03
	
	I0729 11:32:49.998699  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:50.001688  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.002124  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.002152  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.002368  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:50.002574  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.002738  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.002886  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:50.003045  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:50.003236  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:50.003252  135944 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-691698-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-691698-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-691698-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:32:50.117832  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:32:50.117868  135944 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 11:32:50.117890  135944 buildroot.go:174] setting up certificates
	I0729 11:32:50.117911  135944 provision.go:84] configureAuth start
	I0729 11:32:50.117925  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetMachineName
	I0729 11:32:50.118204  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:32:50.121640  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.122058  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.122097  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.122265  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:50.124448  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.124802  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.124828  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.125021  135944 provision.go:143] copyHostCerts
	I0729 11:32:50.125052  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:32:50.125085  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 11:32:50.125094  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:32:50.125158  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 11:32:50.125227  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:32:50.125244  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 11:32:50.125250  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:32:50.125272  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 11:32:50.125358  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:32:50.125378  135944 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 11:32:50.125382  135944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:32:50.125403  135944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 11:32:50.125452  135944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.ha-691698-m03 san=[127.0.0.1 192.168.39.23 ha-691698-m03 localhost minikube]
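	provision.go then mints a server certificate for the node with the SANs listed above (127.0.0.1, the node IP, the hostname, localhost, minikube). A self-contained sketch of issuing such a certificate with crypto/x509; for brevity it self-signs instead of loading the real ca.pem/ca-key.pem, so treat it as an illustration of the SAN layout rather than the actual signing flow:

	```go
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-691698-m03"}}, // org= from the log
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as logged: 127.0.0.1 192.168.39.23 ha-691698-m03 localhost minikube
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.23")},
			DNSNames:    []string{"ha-691698-m03", "localhost", "minikube"},
		}

		// Self-signed here; the real flow signs with the CA key referenced above.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	```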
	I0729 11:32:50.523937  135944 provision.go:177] copyRemoteCerts
	I0729 11:32:50.523997  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:32:50.524022  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:50.526913  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.527358  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.527384  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.527554  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:50.527746  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.527948  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:50.528143  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:32:50.614253  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 11:32:50.614328  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 11:32:50.638415  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 11:32:50.638497  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:32:50.661486  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 11:32:50.661580  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:32:50.683745  135944 provision.go:87] duration metric: took 565.817341ms to configureAuth
	I0729 11:32:50.683774  135944 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:32:50.684051  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:32:50.684142  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:50.686743  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.687151  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.687192  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.687406  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:50.687636  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.687828  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.687953  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:50.688070  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:50.688256  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:50.688270  135944 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:32:50.949315  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:32:50.949343  135944 main.go:141] libmachine: Checking connection to Docker...
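	The `%!s(MISSING)` in the crio provisioning command above (and in the `date +%!s(MISSING).%!N(MISSING)` probe later in the log) is not shell syntax: it is Go's fmt package flagging a format verb that was given no argument when the command string was built, e.g.:

	```go
	package main

	import "fmt"

	func main() {
		// A format string containing a %s verb but no corresponding argument:
		cmd := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"")
		fmt.Println(cmd)
		// Prints: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "..."
	}
	```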
	I0729 11:32:50.949352  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetURL
	I0729 11:32:50.950652  135944 main.go:141] libmachine: (ha-691698-m03) DBG | Using libvirt version 6000000
	I0729 11:32:50.952621  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.952944  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.953002  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.953164  135944 main.go:141] libmachine: Docker is up and running!
	I0729 11:32:50.953182  135944 main.go:141] libmachine: Reticulating splines...
	I0729 11:32:50.953191  135944 client.go:171] duration metric: took 24.460085955s to LocalClient.Create
	I0729 11:32:50.953218  135944 start.go:167] duration metric: took 24.460171474s to libmachine.API.Create "ha-691698"
	I0729 11:32:50.953228  135944 start.go:293] postStartSetup for "ha-691698-m03" (driver="kvm2")
	I0729 11:32:50.953238  135944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:32:50.953264  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:50.953522  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:32:50.953550  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:50.955584  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.955929  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:50.955954  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:50.956152  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:50.956340  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:50.956491  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:50.956628  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:32:51.038978  135944 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:32:51.043053  135944 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:32:51.043084  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 11:32:51.043178  135944 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 11:32:51.043250  135944 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 11:32:51.043260  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /etc/ssl/certs/1209632.pem
	I0729 11:32:51.043337  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:32:51.052205  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:32:51.074936  135944 start.go:296] duration metric: took 121.692957ms for postStartSetup
	I0729 11:32:51.074982  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetConfigRaw
	I0729 11:32:51.075567  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:32:51.078071  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.078474  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:51.078497  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.078731  135944 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:32:51.078948  135944 start.go:128] duration metric: took 24.604669765s to createHost
	I0729 11:32:51.078972  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:51.081210  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.081480  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:51.081503  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.081634  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:51.081805  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:51.081962  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:51.082091  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:51.082250  135944 main.go:141] libmachine: Using SSH client type: native
	I0729 11:32:51.082415  135944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0729 11:32:51.082426  135944 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:32:51.193768  135944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252771.171839295
	
	I0729 11:32:51.193792  135944 fix.go:216] guest clock: 1722252771.171839295
	I0729 11:32:51.193799  135944 fix.go:229] Guest: 2024-07-29 11:32:51.171839295 +0000 UTC Remote: 2024-07-29 11:32:51.078960346 +0000 UTC m=+152.005129423 (delta=92.878949ms)
	I0729 11:32:51.193821  135944 fix.go:200] guest clock delta is within tolerance: 92.878949ms
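	fix.go compares the guest clock (read over SSH as seconds.nanoseconds) with the host-side timestamp and only resynchronizes if the delta exceeds a tolerance; here 92.878949ms is within bounds. A small sketch of that comparison using the values from the log; the tolerance constant is an assumption for illustration, not the value in fix.go:

	```go
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock parses the "seconds.nanoseconds" string returned by the
	// date probe above, e.g. "1722252771.171839295".
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1722252771.171839295") // guest clock from the log
		if err != nil {
			panic(err)
		}
		remote := time.Date(2024, 7, 29, 11, 32, 51, 78960346, time.UTC) // host-side "Remote" timestamp from the log
		delta := guest.Sub(remote)

		const tolerance = 2 * time.Second // illustrative threshold
		if delta < 0 {
			delta = -delta
		}
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta) // prints 92.878949ms
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}
	```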
	I0729 11:32:51.193827  135944 start.go:83] releasing machines lock for "ha-691698-m03", held for 24.719660407s
	I0729 11:32:51.193851  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:51.194135  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:32:51.196816  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.197215  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:51.197257  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.199300  135944 out.go:177] * Found network options:
	I0729 11:32:51.200740  135944 out.go:177]   - NO_PROXY=192.168.39.244,192.168.39.5
	W0729 11:32:51.201894  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 11:32:51.201917  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 11:32:51.201954  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:51.202485  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:51.202701  135944 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:32:51.202816  135944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:32:51.202861  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	W0729 11:32:51.202930  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 11:32:51.202958  135944 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 11:32:51.203024  135944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:32:51.203048  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:32:51.205679  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.206115  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:51.206141  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.206160  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.206328  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:51.206482  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:51.206614  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:51.206648  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:51.206672  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:51.206757  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:32:51.206815  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:32:51.206960  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:32:51.207088  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:32:51.207219  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:32:51.439362  135944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:32:51.445244  135944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:32:51.445322  135944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:32:51.462422  135944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 11:32:51.462454  135944 start.go:495] detecting cgroup driver to use...
	I0729 11:32:51.462531  135944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:32:51.478560  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:32:51.492786  135944 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:32:51.492852  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:32:51.506773  135944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:32:51.519525  135944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:32:51.635696  135944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:32:51.781569  135944 docker.go:233] disabling docker service ...
	I0729 11:32:51.781659  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:32:51.797897  135944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:32:51.812185  135944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:32:51.961731  135944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:32:52.079126  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:32:52.093096  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:32:52.111134  135944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:32:52.111200  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.120915  135944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:32:52.120997  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.130853  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.140645  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.149934  135944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:32:52.159520  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.168747  135944 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.184168  135944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:32:52.193278  135944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:32:52.201924  135944 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 11:32:52.201981  135944 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 11:32:52.213583  135944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
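For context: the sysctl probe above fails only because the br_netfilter module is not yet loaded (the /proc/sys/net/bridge/ entries exist only once it is), so minikube loads the module and then enables IPv4 forwarding. A minimal manual check of the same preconditions might look like the following; these commands are illustrative and not part of the captured run:
  lsmod | grep br_netfilter                  # module should be listed after the modprobe
  sysctl net.bridge.bridge-nf-call-iptables  # should resolve once br_netfilter is loaded
  cat /proc/sys/net/ipv4/ip_forward          # should report 1 after forwarding is enabled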
	I0729 11:32:52.222229  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:32:52.332876  135944 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:32:52.463664  135944 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:32:52.463751  135944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:32:52.468089  135944 start.go:563] Will wait 60s for crictl version
	I0729 11:32:52.468152  135944 ssh_runner.go:195] Run: which crictl
	I0729 11:32:52.471589  135944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:32:52.507852  135944 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:32:52.507930  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:32:52.537199  135944 ssh_runner.go:195] Run: crio --version
	I0729 11:32:52.564742  135944 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:32:52.566072  135944 out.go:177]   - env NO_PROXY=192.168.39.244
	I0729 11:32:52.567338  135944 out.go:177]   - env NO_PROXY=192.168.39.244,192.168.39.5
	I0729 11:32:52.568500  135944 main.go:141] libmachine: (ha-691698-m03) Calling .GetIP
	I0729 11:32:52.571227  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:52.571511  135944 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:32:52.571539  135944 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:32:52.571772  135944 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:32:52.575623  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:32:52.587039  135944 mustload.go:65] Loading cluster: ha-691698
	I0729 11:32:52.587279  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:32:52.587534  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:32:52.587579  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:32:52.602592  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33375
	I0729 11:32:52.603149  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:32:52.603576  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:32:52.603596  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:32:52.603928  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:32:52.604117  135944 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:32:52.605606  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:32:52.606003  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:32:52.606048  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:32:52.622153  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0729 11:32:52.622543  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:32:52.623029  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:32:52.623049  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:32:52.623325  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:32:52.623519  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:32:52.623689  135944 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698 for IP: 192.168.39.23
	I0729 11:32:52.623702  135944 certs.go:194] generating shared ca certs ...
	I0729 11:32:52.623722  135944 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:32:52.623871  135944 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 11:32:52.623927  135944 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 11:32:52.623952  135944 certs.go:256] generating profile certs ...
	I0729 11:32:52.624078  135944 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key
	I0729 11:32:52.624110  135944 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.405a42bf
	I0729 11:32:52.624132  135944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.405a42bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244 192.168.39.5 192.168.39.23 192.168.39.254]
	I0729 11:32:52.781549  135944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.405a42bf ...
	I0729 11:32:52.781603  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.405a42bf: {Name:mk72a72dfcb0a940636db8277f758a4b89126c0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:32:52.781792  135944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.405a42bf ...
	I0729 11:32:52.781815  135944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.405a42bf: {Name:mkbbb0d7426fd151fdc24ad3b481afd03426af32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:32:52.781915  135944 certs.go:381] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.405a42bf -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt
	I0729 11:32:52.782066  135944 certs.go:385] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.405a42bf -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key
	I0729 11:32:52.782228  135944 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key
	I0729 11:32:52.782248  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 11:32:52.782266  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 11:32:52.782286  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 11:32:52.782305  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 11:32:52.782322  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 11:32:52.782340  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 11:32:52.782357  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 11:32:52.782372  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 11:32:52.782440  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 11:32:52.782481  135944 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 11:32:52.782494  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 11:32:52.782525  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:32:52.782555  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:32:52.782593  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 11:32:52.782644  135944 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:32:52.782682  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:32:52.782703  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem -> /usr/share/ca-certificates/120963.pem
	I0729 11:32:52.782720  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /usr/share/ca-certificates/1209632.pem
	I0729 11:32:52.782765  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:32:52.785986  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:32:52.786429  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:32:52.786460  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:32:52.786632  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:32:52.786842  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:32:52.786981  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:32:52.787130  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:32:52.861367  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 11:32:52.865802  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 11:32:52.875801  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 11:32:52.879467  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 11:32:52.889150  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 11:32:52.892914  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 11:32:52.906958  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 11:32:52.911068  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0729 11:32:52.921168  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 11:32:52.924834  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 11:32:52.935394  135944 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 11:32:52.939557  135944 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 11:32:52.949542  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:32:52.973517  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:32:52.996464  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:32:53.019282  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 11:32:53.041165  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 11:32:53.062393  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:32:53.083986  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:32:53.105902  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:32:53.131173  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:32:53.153435  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 11:32:53.177365  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 11:32:53.200703  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 11:32:53.215927  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 11:32:53.231438  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 11:32:53.247107  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0729 11:32:53.262900  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 11:32:53.279050  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 11:32:53.294807  135944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 11:32:53.311716  135944 ssh_runner.go:195] Run: openssl version
	I0729 11:32:53.317354  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:32:53.327563  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:32:53.331932  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:32:53.331989  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:32:53.337727  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:32:53.348175  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 11:32:53.358888  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 11:32:53.363044  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 11:32:53.363110  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 11:32:53.368498  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
	I0729 11:32:53.378504  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 11:32:53.388631  135944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 11:32:53.392863  135944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 11:32:53.392921  135944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 11:32:53.398134  135944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:32:53.408052  135944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:32:53.411580  135944 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 11:32:53.411628  135944 kubeadm.go:934] updating node {m03 192.168.39.23 8443 v1.30.3 crio true true} ...
	I0729 11:32:53.411738  135944 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-691698-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:32:53.411766  135944 kube-vip.go:115] generating kube-vip config ...
	I0729 11:32:53.411801  135944 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 11:32:53.427737  135944 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 11:32:53.427816  135944 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
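For context: the manifest above is the kube-vip static pod that minikube generates so the control-plane nodes share the virtual IP 192.168.39.254 on port 8443 (it is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below). An illustrative way to confirm the VIP is being served, assuming shell access to whichever node currently holds the lease (these commands are not from the captured run):
  ip -4 addr show dev eth0 | grep 192.168.39.254  # the VIP appears as an address on the current leader
  curl -k https://192.168.39.254:8443/version      # the API server should respond on the shared endpoint (possibly 401/403 for anonymous requests)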
	I0729 11:32:53.427864  135944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:32:53.437078  135944 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 11:32:53.437151  135944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 11:32:53.446286  135944 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 11:32:53.446301  135944 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 11:32:53.446321  135944 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 11:32:53.446327  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 11:32:53.446364  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:32:53.446405  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 11:32:53.446310  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 11:32:53.446516  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 11:32:53.464828  135944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 11:32:53.464874  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 11:32:53.464904  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 11:32:53.464937  135944 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 11:32:53.464909  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 11:32:53.465024  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 11:32:53.491944  135944 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 11:32:53.491998  135944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 11:32:54.351521  135944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 11:32:54.360880  135944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 11:32:54.376607  135944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:32:54.392646  135944 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 11:32:54.408942  135944 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 11:32:54.412888  135944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:32:54.425220  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:32:54.541043  135944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:32:54.566818  135944 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:32:54.567219  135944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:32:54.567268  135944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:32:54.584426  135944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0729 11:32:54.584858  135944 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:32:54.585405  135944 main.go:141] libmachine: Using API Version  1
	I0729 11:32:54.585428  135944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:32:54.585858  135944 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:32:54.586104  135944 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:32:54.586275  135944 start.go:317] joinCluster: &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:32:54.586453  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 11:32:54.586481  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:32:54.589134  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:32:54.589645  135944 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:32:54.589678  135944 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:32:54.589806  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:32:54.589989  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:32:54.590150  135944 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:32:54.590293  135944 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:32:54.738790  135944 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:32:54.738841  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token smpn7k.i9836phgoguqneu8 --discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-691698-m03 --control-plane --apiserver-advertise-address=192.168.39.23 --apiserver-bind-port=8443"
	I0729 11:33:16.724322  135944 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token smpn7k.i9836phgoguqneu8 --discovery-token-ca-cert-hash sha256:b76336cdc1e5832f38dd1fe4d1273d40548edec7e16961a5bdd3e1b68babbbfb --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-691698-m03 --control-plane --apiserver-advertise-address=192.168.39.23 --apiserver-bind-port=8443": (21.985447587s)
	I0729 11:33:16.724369  135944 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 11:33:17.380203  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-691698-m03 minikube.k8s.io/updated_at=2024_07_29T11_33_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53 minikube.k8s.io/name=ha-691698 minikube.k8s.io/primary=false
	I0729 11:33:17.516985  135944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-691698-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 11:33:17.616661  135944 start.go:319] duration metric: took 23.03037939s to joinCluster
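For context: the "kubeadm join ... --control-plane" invocation above is what promotes ha-691698-m03 to the third control-plane member, joining through the shared endpoint control-plane.minikube.internal:8443 (the kube-vip address). Illustrative follow-up checks, not part of the captured run, could be:
  kubectl get nodes -o wide -l node-role.kubernetes.io/control-plane  # all three control-plane nodes should be listed
  kubectl -n kube-system get pods -l component=etcd                   # one etcd member per control-plane node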
	I0729 11:33:17.616763  135944 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 11:33:17.617152  135944 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:33:17.617918  135944 out.go:177] * Verifying Kubernetes components...
	I0729 11:33:17.619414  135944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:33:17.892282  135944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:33:17.975213  135944 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:33:17.975533  135944 kapi.go:59] client config for ha-691698: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.crt", KeyFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key", CAFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 11:33:17.975612  135944 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.244:8443
	I0729 11:33:17.975894  135944 node_ready.go:35] waiting up to 6m0s for node "ha-691698-m03" to be "Ready" ...
	I0729 11:33:17.976023  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:17.976037  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:17.976048  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:17.976052  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:17.979508  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:18.476527  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:18.476555  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:18.476567  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:18.476572  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:18.481146  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:18.976451  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:18.976476  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:18.976487  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:18.976493  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:18.979912  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:19.476610  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:19.476632  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:19.476640  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:19.476644  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:19.479619  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:19.977118  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:19.977145  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:19.977156  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:19.977166  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:19.980385  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:19.981052  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:20.476401  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:20.476428  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:20.476439  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:20.476444  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:20.481123  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:20.976819  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:20.976852  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:20.976864  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:20.976870  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:20.980534  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:21.476742  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:21.476763  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:21.476770  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:21.476773  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:21.479679  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:21.976506  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:21.976529  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:21.976538  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:21.976542  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:21.979639  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:22.476482  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:22.476513  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:22.476526  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:22.476540  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:22.480168  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:22.480930  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:22.976210  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:22.976235  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:22.976246  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:22.976251  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:22.979890  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:23.476856  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:23.476889  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:23.476899  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:23.476904  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:23.480180  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:23.976936  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:23.976969  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:23.976979  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:23.976985  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:23.980232  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:24.476901  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:24.476931  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:24.476942  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:24.476948  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:24.480844  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:24.481620  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:24.976755  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:24.976777  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:24.976788  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:24.976795  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:24.979955  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:25.477012  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:25.477036  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:25.477044  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:25.477048  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:25.480644  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:25.977108  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:25.977140  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:25.977149  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:25.977152  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:25.980509  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:26.477158  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:26.477182  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:26.477193  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:26.477198  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:26.480819  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:26.976541  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:26.976563  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:26.976571  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:26.976575  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:26.980063  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:26.980515  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:27.476900  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:27.476924  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:27.476932  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:27.476937  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:27.480865  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:27.976715  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:27.976744  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:27.976756  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:27.976760  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:27.979990  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:28.477061  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:28.477089  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:28.477101  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:28.477106  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:28.480424  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:28.977002  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:28.977026  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:28.977035  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:28.977041  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:28.980428  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:28.980984  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:29.476284  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:29.476311  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:29.476323  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:29.476331  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:29.479745  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:29.977013  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:29.977037  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:29.977046  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:29.977051  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:29.980809  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:30.477161  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:30.477199  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:30.477211  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:30.477215  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:30.483350  135944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 11:33:30.976863  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:30.976894  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:30.976905  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:30.976909  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:30.980365  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:31.476139  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:31.476163  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:31.476172  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:31.476175  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:31.479329  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:31.479918  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:31.976198  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:31.976221  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:31.976230  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:31.976234  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:31.979984  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:32.477101  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:32.477126  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:32.477134  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:32.477138  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:32.480542  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:32.976396  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:32.976431  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:32.976444  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:32.976448  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:32.980223  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:33.476113  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:33.476135  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:33.476141  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:33.476145  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:33.479452  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:33.480127  135944 node_ready.go:53] node "ha-691698-m03" has status "Ready":"False"
	I0729 11:33:33.976473  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:33.976500  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:33.976514  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:33.976520  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:33.981126  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:34.476861  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:34.476889  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:34.476898  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:34.476903  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:34.480752  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:34.976558  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:34.976582  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:34.976591  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:34.976595  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:34.980049  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.476564  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:35.476596  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.476608  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.476613  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.480040  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.480595  135944 node_ready.go:49] node "ha-691698-m03" has status "Ready":"True"
	I0729 11:33:35.480615  135944 node_ready.go:38] duration metric: took 17.504704382s for node "ha-691698-m03" to be "Ready" ...
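For context: the repeated GET requests above are minikube's readiness poll, re-fetching the Node object roughly every 500ms until its Ready condition turns True (about 17.5s here). Roughly the same wait could be expressed with a single command (illustrative only, not from the captured run):
  kubectl wait --for=condition=Ready node/ha-691698-m03 --timeout=6m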
	I0729 11:33:35.480623  135944 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:33:35.480698  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:33:35.480708  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.480716  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.480719  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.487227  135944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 11:33:35.493265  135944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.493379  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p7zbj
	I0729 11:33:35.493390  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.493401  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.493409  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.496712  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.497384  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:35.497404  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.497414  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.497420  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.500373  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:35.501076  135944 pod_ready.go:92] pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:35.501103  135944 pod_ready.go:81] duration metric: took 7.806838ms for pod "coredns-7db6d8ff4d-p7zbj" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.501117  135944 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.501209  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-r48d8
	I0729 11:33:35.501224  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.501235  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.501241  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.504996  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.505767  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:35.505781  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.505789  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.505793  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.508804  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:35.509353  135944 pod_ready.go:92] pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:35.509376  135944 pod_ready.go:81] duration metric: took 8.248373ms for pod "coredns-7db6d8ff4d-r48d8" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.509386  135944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.509443  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698
	I0729 11:33:35.509450  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.509457  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.509461  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.512285  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:35.512806  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:35.512821  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.512827  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.512833  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.515362  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:35.515876  135944 pod_ready.go:92] pod "etcd-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:35.515893  135944 pod_ready.go:81] duration metric: took 6.500912ms for pod "etcd-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.515901  135944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.515955  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698-m02
	I0729 11:33:35.515962  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.515969  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.515972  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.519214  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.519869  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:35.519884  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.519890  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.519895  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.522694  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:35.523140  135944 pod_ready.go:92] pod "etcd-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:35.523157  135944 pod_ready.go:81] duration metric: took 7.249375ms for pod "etcd-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.523167  135944 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.677590  135944 request.go:629] Waited for 154.323479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698-m03
	I0729 11:33:35.677661  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/etcd-ha-691698-m03
	I0729 11:33:35.677669  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.677682  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.677691  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.681210  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.877397  135944 request.go:629] Waited for 195.290674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:35.877485  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:35.877493  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:35.877499  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:35.877506  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:35.881156  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:35.881836  135944 pod_ready.go:92] pod "etcd-ha-691698-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:35.881857  135944 pod_ready.go:81] duration metric: took 358.684511ms for pod "etcd-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:35.881872  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:36.076974  135944 request.go:629] Waited for 195.006786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698
	I0729 11:33:36.077035  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698
	I0729 11:33:36.077040  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:36.077048  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:36.077051  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:36.080790  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:36.276792  135944 request.go:629] Waited for 195.282686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:36.276864  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:36.276870  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:36.276878  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:36.276883  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:36.280177  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:36.280821  135944 pod_ready.go:92] pod "kube-apiserver-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:36.280840  135944 pod_ready.go:81] duration metric: took 398.960323ms for pod "kube-apiserver-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:36.280850  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:36.476985  135944 request.go:629] Waited for 196.028731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m02
	I0729 11:33:36.477053  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m02
	I0729 11:33:36.477058  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:36.477066  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:36.477071  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:36.479999  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:36.677027  135944 request.go:629] Waited for 196.44866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:36.677102  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:36.677108  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:36.677116  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:36.677121  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:36.680311  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:36.681057  135944 pod_ready.go:92] pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:36.681086  135944 pod_ready.go:81] duration metric: took 400.229128ms for pod "kube-apiserver-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:36.681101  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:36.877420  135944 request.go:629] Waited for 196.223189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m03
	I0729 11:33:36.877492  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-691698-m03
	I0729 11:33:36.877497  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:36.877505  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:36.877512  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:36.881213  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.077177  135944 request.go:629] Waited for 195.243655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:37.077241  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:37.077248  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:37.077260  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:37.077268  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:37.080628  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.081197  135944 pod_ready.go:92] pod "kube-apiserver-ha-691698-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:37.081219  135944 pod_ready.go:81] duration metric: took 400.111392ms for pod "kube-apiserver-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:37.081231  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:37.277319  135944 request.go:629] Waited for 195.994566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698
	I0729 11:33:37.277380  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698
	I0729 11:33:37.277385  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:37.277391  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:37.277396  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:37.280777  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.477079  135944 request.go:629] Waited for 195.383768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:37.477158  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:37.477166  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:37.477184  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:37.477193  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:37.480746  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.481697  135944 pod_ready.go:92] pod "kube-controller-manager-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:37.481717  135944 pod_ready.go:81] duration metric: took 400.479808ms for pod "kube-controller-manager-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:37.481728  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:37.676866  135944 request.go:629] Waited for 195.051558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m02
	I0729 11:33:37.676954  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m02
	I0729 11:33:37.676977  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:37.676988  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:37.676999  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:37.680200  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.877421  135944 request.go:629] Waited for 196.36184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:37.877483  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:37.877489  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:37.877499  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:37.877505  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:37.880784  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:37.881356  135944 pod_ready.go:92] pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:37.881375  135944 pod_ready.go:81] duration metric: took 399.640955ms for pod "kube-controller-manager-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:37.881388  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:38.077575  135944 request.go:629] Waited for 196.085992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m03
	I0729 11:33:38.077638  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-691698-m03
	I0729 11:33:38.077643  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:38.077651  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:38.077656  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:38.081142  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:38.277262  135944 request.go:629] Waited for 195.361703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:38.277355  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:38.277362  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:38.277372  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:38.277381  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:38.280412  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:38.280986  135944 pod_ready.go:92] pod "kube-controller-manager-ha-691698-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:38.281007  135944 pod_ready.go:81] duration metric: took 399.608004ms for pod "kube-controller-manager-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:38.281017  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5hn2s" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:38.477188  135944 request.go:629] Waited for 196.091424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hn2s
	I0729 11:33:38.477250  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hn2s
	I0729 11:33:38.477255  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:38.477263  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:38.477267  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:38.480865  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:38.676922  135944 request.go:629] Waited for 195.370243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:38.677028  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:38.677036  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:38.677047  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:38.677054  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:38.680314  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:38.680910  135944 pod_ready.go:92] pod "kube-proxy-5hn2s" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:38.680933  135944 pod_ready.go:81] duration metric: took 399.909196ms for pod "kube-proxy-5hn2s" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:38.680947  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8p4nc" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:38.877079  135944 request.go:629] Waited for 196.014421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8p4nc
	I0729 11:33:38.877141  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8p4nc
	I0729 11:33:38.877146  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:38.877155  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:38.877159  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:38.880723  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:39.076665  135944 request.go:629] Waited for 195.263999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:39.076724  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:39.076729  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:39.076737  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:39.076741  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:39.080321  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:39.081129  135944 pod_ready.go:92] pod "kube-proxy-8p4nc" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:39.081150  135944 pod_ready.go:81] duration metric: took 400.191431ms for pod "kube-proxy-8p4nc" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:39.081163  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vd69n" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:39.277071  135944 request.go:629] Waited for 195.822792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vd69n
	I0729 11:33:39.277155  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vd69n
	I0729 11:33:39.277163  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:39.277172  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:39.277178  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:39.280065  135944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 11:33:39.476952  135944 request.go:629] Waited for 196.215506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:39.477039  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:39.477048  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:39.477055  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:39.477062  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:39.480471  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:39.481302  135944 pod_ready.go:92] pod "kube-proxy-vd69n" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:39.481328  135944 pod_ready.go:81] duration metric: took 400.156619ms for pod "kube-proxy-vd69n" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:39.481340  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:39.676662  135944 request.go:629] Waited for 195.245723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698
	I0729 11:33:39.676727  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698
	I0729 11:33:39.676734  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:39.676744  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:39.676752  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:39.680109  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:39.877044  135944 request.go:629] Waited for 196.377501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:39.877125  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698
	I0729 11:33:39.877134  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:39.877148  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:39.877158  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:39.880646  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:39.881175  135944 pod_ready.go:92] pod "kube-scheduler-ha-691698" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:39.881195  135944 pod_ready.go:81] duration metric: took 399.847201ms for pod "kube-scheduler-ha-691698" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:39.881208  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:40.077415  135944 request.go:629] Waited for 196.12709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m02
	I0729 11:33:40.077490  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m02
	I0729 11:33:40.077495  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.077504  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.077509  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.082182  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:40.277261  135944 request.go:629] Waited for 194.338625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:40.277316  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m02
	I0729 11:33:40.277321  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.277332  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.277337  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.280480  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:40.281181  135944 pod_ready.go:92] pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:40.281201  135944 pod_ready.go:81] duration metric: took 399.985434ms for pod "kube-scheduler-ha-691698-m02" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:40.281211  135944 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:40.477244  135944 request.go:629] Waited for 195.927385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m03
	I0729 11:33:40.477327  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-691698-m03
	I0729 11:33:40.477340  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.477351  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.477358  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.482353  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:40.677394  135944 request.go:629] Waited for 194.413012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:40.677474  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes/ha-691698-m03
	I0729 11:33:40.677481  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.677491  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.677496  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.681641  135944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 11:33:40.682503  135944 pod_ready.go:92] pod "kube-scheduler-ha-691698-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 11:33:40.682528  135944 pod_ready.go:81] duration metric: took 401.308999ms for pod "kube-scheduler-ha-691698-m03" in "kube-system" namespace to be "Ready" ...
	I0729 11:33:40.682541  135944 pod_ready.go:38] duration metric: took 5.20190254s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:33:40.682558  135944 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:33:40.682613  135944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:33:40.698690  135944 api_server.go:72] duration metric: took 23.081883659s to wait for apiserver process to appear ...
	I0729 11:33:40.698720  135944 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:33:40.698744  135944 api_server.go:253] Checking apiserver healthz at https://192.168.39.244:8443/healthz ...
	I0729 11:33:40.703766  135944 api_server.go:279] https://192.168.39.244:8443/healthz returned 200:
	ok
	I0729 11:33:40.703848  135944 round_trippers.go:463] GET https://192.168.39.244:8443/version
	I0729 11:33:40.703856  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.703864  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.703870  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.705000  135944 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 11:33:40.705070  135944 api_server.go:141] control plane version: v1.30.3
	I0729 11:33:40.705087  135944 api_server.go:131] duration metric: took 6.35952ms to wait for apiserver health ...
	I0729 11:33:40.705095  135944 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 11:33:40.876860  135944 request.go:629] Waited for 171.677496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:33:40.876948  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:33:40.876956  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:40.876986  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:40.876994  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:40.883957  135944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 11:33:40.890456  135944 system_pods.go:59] 24 kube-system pods found
	I0729 11:33:40.890494  135944 system_pods.go:61] "coredns-7db6d8ff4d-p7zbj" [7b85aaa0-2ae6-4883-b4e1-8e8af1eea933] Running
	I0729 11:33:40.890501  135944 system_pods.go:61] "coredns-7db6d8ff4d-r48d8" [4d0329d8-26c1-49e5-8af9-8ecda56993ca] Running
	I0729 11:33:40.890506  135944 system_pods.go:61] "etcd-ha-691698" [0ee49cc2-19a3-4c80-bd79-460cc88206ee] Running
	I0729 11:33:40.890512  135944 system_pods.go:61] "etcd-ha-691698-m02" [1b8d5662-c834-47b7-a129-820e1f0a7883] Running
	I0729 11:33:40.890517  135944 system_pods.go:61] "etcd-ha-691698-m03" [b8bce546-d13c-4402-b1d4-d2f0d00aba09] Running
	I0729 11:33:40.890521  135944 system_pods.go:61] "kindnet-gl972" [caf4ea26-7d7a-419f-9493-67639c78ed1d] Running
	I0729 11:33:40.890526  135944 system_pods.go:61] "kindnet-n929l" [02c92d04-dd42-46c2-9033-5306d7490e0f] Running
	I0729 11:33:40.890530  135944 system_pods.go:61] "kindnet-wrx27" [6623ec79-af43-4486-bd89-65e8692e920c] Running
	I0729 11:33:40.890535  135944 system_pods.go:61] "kube-apiserver-ha-691698" [ad0e6226-1f3a-4d3f-a81d-c572dc307e90] Running
	I0729 11:33:40.890546  135944 system_pods.go:61] "kube-apiserver-ha-691698-m02" [03c7a68e-a0df-4d22-a96d-c08d4a6099dd] Running
	I0729 11:33:40.890556  135944 system_pods.go:61] "kube-apiserver-ha-691698-m03" [66ea3cca-4a77-4756-855a-b34c2e420ca7] Running
	I0729 11:33:40.890561  135944 system_pods.go:61] "kube-controller-manager-ha-691698" [33507788-a0ea-4f85-98b8-670617e63b2e] Running
	I0729 11:33:40.890565  135944 system_pods.go:61] "kube-controller-manager-ha-691698-m02" [be300341-bb85-4c72-b66a-f1a5c280e48c] Running
	I0729 11:33:40.890572  135944 system_pods.go:61] "kube-controller-manager-ha-691698-m03" [a0a8f594-4b59-4601-958e-fd524fde33ee] Running
	I0729 11:33:40.890575  135944 system_pods.go:61] "kube-proxy-5hn2s" [b73c788f-9f8d-421e-b967-89b9154ea946] Running
	I0729 11:33:40.890581  135944 system_pods.go:61] "kube-proxy-8p4nc" [c20bd4bc-8fca-437d-854e-b01b594f32f4] Running
	I0729 11:33:40.890584  135944 system_pods.go:61] "kube-proxy-vd69n" [596d3835-5ab1-4009-a1d3-ccde26b14f32] Running
	I0729 11:33:40.890589  135944 system_pods.go:61] "kube-scheduler-ha-691698" [c6a21e51-28c0-41d2-b1a1-30bb1ad4e979] Running
	I0729 11:33:40.890592  135944 system_pods.go:61] "kube-scheduler-ha-691698-m02" [65d29208-4055-4da5-b612-454ef28c5c0e] Running
	I0729 11:33:40.890597  135944 system_pods.go:61] "kube-scheduler-ha-691698-m03" [6519ce66-a98b-4d83-8e81-f1e35896ebdb] Running
	I0729 11:33:40.890601  135944 system_pods.go:61] "kube-vip-ha-691698" [1b5b8d68-2923-4dc5-bcf1-492593eb2d51] Running
	I0729 11:33:40.890604  135944 system_pods.go:61] "kube-vip-ha-691698-m02" [8a2d8ba0-dc4e-4831-b9f2-31c18b9edc91] Running
	I0729 11:33:40.890607  135944 system_pods.go:61] "kube-vip-ha-691698-m03" [0648712d-e530-460f-b39a-c8a61229587f] Running
	I0729 11:33:40.890611  135944 system_pods.go:61] "storage-provisioner" [694c60e1-9d4e-4fea-96e6-21554bbf1aaa] Running
	I0729 11:33:40.890620  135944 system_pods.go:74] duration metric: took 185.512939ms to wait for pod list to return data ...
	I0729 11:33:40.890630  135944 default_sa.go:34] waiting for default service account to be created ...
	I0729 11:33:41.077052  135944 request.go:629] Waited for 186.33972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/default/serviceaccounts
	I0729 11:33:41.077128  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/default/serviceaccounts
	I0729 11:33:41.077136  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:41.077147  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:41.077157  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:41.080477  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:41.080599  135944 default_sa.go:45] found service account: "default"
	I0729 11:33:41.080613  135944 default_sa.go:55] duration metric: took 189.975552ms for default service account to be created ...
	I0729 11:33:41.080621  135944 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 11:33:41.277084  135944 request.go:629] Waited for 196.39084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:33:41.277169  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/namespaces/kube-system/pods
	I0729 11:33:41.277178  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:41.277186  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:41.277193  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:41.283853  135944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 11:33:41.290170  135944 system_pods.go:86] 24 kube-system pods found
	I0729 11:33:41.290199  135944 system_pods.go:89] "coredns-7db6d8ff4d-p7zbj" [7b85aaa0-2ae6-4883-b4e1-8e8af1eea933] Running
	I0729 11:33:41.290205  135944 system_pods.go:89] "coredns-7db6d8ff4d-r48d8" [4d0329d8-26c1-49e5-8af9-8ecda56993ca] Running
	I0729 11:33:41.290210  135944 system_pods.go:89] "etcd-ha-691698" [0ee49cc2-19a3-4c80-bd79-460cc88206ee] Running
	I0729 11:33:41.290214  135944 system_pods.go:89] "etcd-ha-691698-m02" [1b8d5662-c834-47b7-a129-820e1f0a7883] Running
	I0729 11:33:41.290218  135944 system_pods.go:89] "etcd-ha-691698-m03" [b8bce546-d13c-4402-b1d4-d2f0d00aba09] Running
	I0729 11:33:41.290222  135944 system_pods.go:89] "kindnet-gl972" [caf4ea26-7d7a-419f-9493-67639c78ed1d] Running
	I0729 11:33:41.290225  135944 system_pods.go:89] "kindnet-n929l" [02c92d04-dd42-46c2-9033-5306d7490e0f] Running
	I0729 11:33:41.290229  135944 system_pods.go:89] "kindnet-wrx27" [6623ec79-af43-4486-bd89-65e8692e920c] Running
	I0729 11:33:41.290233  135944 system_pods.go:89] "kube-apiserver-ha-691698" [ad0e6226-1f3a-4d3f-a81d-c572dc307e90] Running
	I0729 11:33:41.290236  135944 system_pods.go:89] "kube-apiserver-ha-691698-m02" [03c7a68e-a0df-4d22-a96d-c08d4a6099dd] Running
	I0729 11:33:41.290240  135944 system_pods.go:89] "kube-apiserver-ha-691698-m03" [66ea3cca-4a77-4756-855a-b34c2e420ca7] Running
	I0729 11:33:41.290244  135944 system_pods.go:89] "kube-controller-manager-ha-691698" [33507788-a0ea-4f85-98b8-670617e63b2e] Running
	I0729 11:33:41.290248  135944 system_pods.go:89] "kube-controller-manager-ha-691698-m02" [be300341-bb85-4c72-b66a-f1a5c280e48c] Running
	I0729 11:33:41.290253  135944 system_pods.go:89] "kube-controller-manager-ha-691698-m03" [a0a8f594-4b59-4601-958e-fd524fde33ee] Running
	I0729 11:33:41.290257  135944 system_pods.go:89] "kube-proxy-5hn2s" [b73c788f-9f8d-421e-b967-89b9154ea946] Running
	I0729 11:33:41.290264  135944 system_pods.go:89] "kube-proxy-8p4nc" [c20bd4bc-8fca-437d-854e-b01b594f32f4] Running
	I0729 11:33:41.290268  135944 system_pods.go:89] "kube-proxy-vd69n" [596d3835-5ab1-4009-a1d3-ccde26b14f32] Running
	I0729 11:33:41.290274  135944 system_pods.go:89] "kube-scheduler-ha-691698" [c6a21e51-28c0-41d2-b1a1-30bb1ad4e979] Running
	I0729 11:33:41.290278  135944 system_pods.go:89] "kube-scheduler-ha-691698-m02" [65d29208-4055-4da5-b612-454ef28c5c0e] Running
	I0729 11:33:41.290284  135944 system_pods.go:89] "kube-scheduler-ha-691698-m03" [6519ce66-a98b-4d83-8e81-f1e35896ebdb] Running
	I0729 11:33:41.290288  135944 system_pods.go:89] "kube-vip-ha-691698" [1b5b8d68-2923-4dc5-bcf1-492593eb2d51] Running
	I0729 11:33:41.290293  135944 system_pods.go:89] "kube-vip-ha-691698-m02" [8a2d8ba0-dc4e-4831-b9f2-31c18b9edc91] Running
	I0729 11:33:41.290297  135944 system_pods.go:89] "kube-vip-ha-691698-m03" [0648712d-e530-460f-b39a-c8a61229587f] Running
	I0729 11:33:41.290300  135944 system_pods.go:89] "storage-provisioner" [694c60e1-9d4e-4fea-96e6-21554bbf1aaa] Running
	I0729 11:33:41.290310  135944 system_pods.go:126] duration metric: took 209.683049ms to wait for k8s-apps to be running ...
	I0729 11:33:41.290320  135944 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 11:33:41.290363  135944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:33:41.304363  135944 system_svc.go:56] duration metric: took 14.026145ms WaitForService to wait for kubelet
	I0729 11:33:41.304397  135944 kubeadm.go:582] duration metric: took 23.687596649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:33:41.304421  135944 node_conditions.go:102] verifying NodePressure condition ...
	I0729 11:33:41.476895  135944 request.go:629] Waited for 172.363425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.244:8443/api/v1/nodes
	I0729 11:33:41.477022  135944 round_trippers.go:463] GET https://192.168.39.244:8443/api/v1/nodes
	I0729 11:33:41.477033  135944 round_trippers.go:469] Request Headers:
	I0729 11:33:41.477041  135944 round_trippers.go:473]     Accept: application/json, */*
	I0729 11:33:41.477046  135944 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 11:33:41.480521  135944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 11:33:41.481556  135944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:33:41.481581  135944 node_conditions.go:123] node cpu capacity is 2
	I0729 11:33:41.481595  135944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:33:41.481600  135944 node_conditions.go:123] node cpu capacity is 2
	I0729 11:33:41.481605  135944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 11:33:41.481610  135944 node_conditions.go:123] node cpu capacity is 2
	I0729 11:33:41.481615  135944 node_conditions.go:105] duration metric: took 177.187937ms to run NodePressure ...
	I0729 11:33:41.481631  135944 start.go:241] waiting for startup goroutines ...
	I0729 11:33:41.481660  135944 start.go:255] writing updated cluster config ...
	I0729 11:33:41.481964  135944 ssh_runner.go:195] Run: rm -f paused
	I0729 11:33:41.532361  135944 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 11:33:41.534444  135944 out.go:177] * Done! kubectl is now configured to use "ha-691698" cluster and "default" namespace by default
	
	
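	[editorial aside, not part of the captured log] The repeated GETs against /api/v1/nodes/... and /api/v1/namespaces/kube-system/pods/... traced above are minikube polling the apiserver until the node's Ready condition and each system-critical pod report Ready. A minimal client-go sketch of that polling pattern is below; the helper name pollNodeReady and the hard-coded node name are illustrative assumptions, not minikube's actual code.

	// sketch.go: poll a node's Ready condition with client-go (illustrative only)
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// pollNodeReady (hypothetical helper) re-queries the node every 500ms until
	// its Ready condition is True or the timeout expires, mirroring the
	// node_ready wait seen in the trace above.
	func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient errors: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config); node name is an example.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := pollNodeReady(context.Background(), cs, "ha-691698-m03", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node Ready")
	}

	The client-side throttling messages in the trace ("Waited for ... due to client-side throttling") come from the default client-go rate limiter, which spaces out bursts of such polls; that is expected behavior, not an error.
	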
	==> CRI-O <==
	Jul 29 11:38:20 ha-691698 crio[683]: time="2024-07-29 11:38:20.955349165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253100955232908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf9f7f19-88de-4fb1-9078-72b3b8815fda name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:38:20 ha-691698 crio[683]: time="2024-07-29 11:38:20.957408622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5927f72-8359-434b-8ea6-45d408a2c88f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:38:20 ha-691698 crio[683]: time="2024-07-29 11:38:20.957465432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5927f72-8359-434b-8ea6-45d408a2c88f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:38:20 ha-691698 crio[683]: time="2024-07-29 11:38:20.957762401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722252826210888442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690309253129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690266805544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47dc452e397f7cb1c57946f2402ede9ae1f47684f951d810ff42eb0164dea598,PodSandboxId:0f5ab4507eb64364350ef70ec120c02b051864b31f2044c38b874890a87052f6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722252690251771556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722252678490935956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172225267
5058374473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c8d491a07d0625279f0e3cbe3dfd94002b73f769b6793807b1a8c8214ee4b3,PodSandboxId:03d80866866230611d1c07b9122ace20a754a9f093ed5194cfac1c8709428dcb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222526579
35455041,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c29f353fca01ed6b9c8c929d7cebfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252655364665094,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c63f4ac923395e3c4f21210b98f155c47ba02f4a51916c9b755155f96154ac6,PodSandboxId:cd880d0b141102f69af0648a41c5c535329ef0c15ad813d4b22fd35e4872208e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252655329457442,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d,PodSandboxId:dba80440eb6efc99f5ed13c10aa1ac0608dd016240ee611fb6e21c77fb5a3641,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252655261275382,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252655267576655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5927f72-8359-434b-8ea6-45d408a2c88f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:38:20 ha-691698 crio[683]: time="2024-07-29 11:38:20.994959355Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25250047-0283-41fb-902f-78b6cda29f02 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:38:20 ha-691698 crio[683]: time="2024-07-29 11:38:20.995034451Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25250047-0283-41fb-902f-78b6cda29f02 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:38:20 ha-691698 crio[683]: time="2024-07-29 11:38:20.997945705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54fe9072-f8b2-4409-bcc9-32811cef07be name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:38:20 ha-691698 crio[683]: time="2024-07-29 11:38:20.998383195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253100998360783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54fe9072-f8b2-4409-bcc9-32811cef07be name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:38:20 ha-691698 crio[683]: time="2024-07-29 11:38:20.998949637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a38d6d4-13fb-4dd1-8a17-164c1e24d074 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:38:20 ha-691698 crio[683]: time="2024-07-29 11:38:20.999006727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a38d6d4-13fb-4dd1-8a17-164c1e24d074 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:38:20 ha-691698 crio[683]: time="2024-07-29 11:38:20.999245743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722252826210888442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690309253129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690266805544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47dc452e397f7cb1c57946f2402ede9ae1f47684f951d810ff42eb0164dea598,PodSandboxId:0f5ab4507eb64364350ef70ec120c02b051864b31f2044c38b874890a87052f6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722252690251771556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722252678490935956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172225267
5058374473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c8d491a07d0625279f0e3cbe3dfd94002b73f769b6793807b1a8c8214ee4b3,PodSandboxId:03d80866866230611d1c07b9122ace20a754a9f093ed5194cfac1c8709428dcb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222526579
35455041,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c29f353fca01ed6b9c8c929d7cebfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252655364665094,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c63f4ac923395e3c4f21210b98f155c47ba02f4a51916c9b755155f96154ac6,PodSandboxId:cd880d0b141102f69af0648a41c5c535329ef0c15ad813d4b22fd35e4872208e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252655329457442,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d,PodSandboxId:dba80440eb6efc99f5ed13c10aa1ac0608dd016240ee611fb6e21c77fb5a3641,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252655261275382,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252655267576655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a38d6d4-13fb-4dd1-8a17-164c1e24d074 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.037100605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02a26149-a7bf-4b97-a7da-f4ed1ea0b9bd name=/runtime.v1.RuntimeService/Version
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.037193532Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02a26149-a7bf-4b97-a7da-f4ed1ea0b9bd name=/runtime.v1.RuntimeService/Version
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.038337520Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5437a460-6d18-4d56-a79e-3d230b5d1ee1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.038880419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253101038854706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5437a460-6d18-4d56-a79e-3d230b5d1ee1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.039338390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc15a6be-dc39-4c82-bd1f-53b0ff6f3b46 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.039404324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc15a6be-dc39-4c82-bd1f-53b0ff6f3b46 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.039650533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722252826210888442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690309253129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690266805544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47dc452e397f7cb1c57946f2402ede9ae1f47684f951d810ff42eb0164dea598,PodSandboxId:0f5ab4507eb64364350ef70ec120c02b051864b31f2044c38b874890a87052f6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722252690251771556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722252678490935956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172225267
5058374473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c8d491a07d0625279f0e3cbe3dfd94002b73f769b6793807b1a8c8214ee4b3,PodSandboxId:03d80866866230611d1c07b9122ace20a754a9f093ed5194cfac1c8709428dcb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222526579
35455041,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c29f353fca01ed6b9c8c929d7cebfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252655364665094,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c63f4ac923395e3c4f21210b98f155c47ba02f4a51916c9b755155f96154ac6,PodSandboxId:cd880d0b141102f69af0648a41c5c535329ef0c15ad813d4b22fd35e4872208e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252655329457442,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d,PodSandboxId:dba80440eb6efc99f5ed13c10aa1ac0608dd016240ee611fb6e21c77fb5a3641,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252655261275382,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252655267576655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc15a6be-dc39-4c82-bd1f-53b0ff6f3b46 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.076327573Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1e8f991-3188-4275-af3a-713691e6f8d0 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.076496960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1e8f991-3188-4275-af3a-713691e6f8d0 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.077892177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eae855f7-d5a7-40c5-8285-c9b5d1cca282 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.078350774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253101078323608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eae855f7-d5a7-40c5-8285-c9b5d1cca282 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.078965468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86818da2-209b-4cb6-a125-d9dfad6af920 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.079070384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86818da2-209b-4cb6-a125-d9dfad6af920 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:38:21 ha-691698 crio[683]: time="2024-07-29 11:38:21.079360401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722252826210888442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690309253129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722252690266805544,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47dc452e397f7cb1c57946f2402ede9ae1f47684f951d810ff42eb0164dea598,PodSandboxId:0f5ab4507eb64364350ef70ec120c02b051864b31f2044c38b874890a87052f6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722252690251771556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722252678490935956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172225267
5058374473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c8d491a07d0625279f0e3cbe3dfd94002b73f769b6793807b1a8c8214ee4b3,PodSandboxId:03d80866866230611d1c07b9122ace20a754a9f093ed5194cfac1c8709428dcb,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222526579
35455041,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29c29f353fca01ed6b9c8c929d7cebfa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722252655364665094,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c63f4ac923395e3c4f21210b98f155c47ba02f4a51916c9b755155f96154ac6,PodSandboxId:cd880d0b141102f69af0648a41c5c535329ef0c15ad813d4b22fd35e4872208e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722252655329457442,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d,PodSandboxId:dba80440eb6efc99f5ed13c10aa1ac0608dd016240ee611fb6e21c77fb5a3641,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722252655261275382,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722252655267576655,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86818da2-209b-4cb6-a125-d9dfad6af920 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	238fb47cd6e36       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   764f56dfda80f       busybox-fc5497c4f-t69zw
	0d819119d1f04       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   d32f436d019c4       coredns-7db6d8ff4d-r48d8
	833566290ab18       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   8d892f55e419c       coredns-7db6d8ff4d-p7zbj
	47dc452e397f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   0f5ab4507eb64       storage-provisioner
	2c476db3ff154       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   ff04fbe0e7040       kindnet-gl972
	2da9ca3c5237b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   7978ad5ef51fb       kube-proxy-5hn2s
	53c8d491a07d0       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   03d8086686623       kube-vip-ha-691698
	24326f59696b1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   f7a6dae3abd7e       kube-scheduler-ha-691698
	2c63f4ac92339       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   cd880d0b14110       kube-apiserver-ha-691698
	1d0e28e4eb5d8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   476f4c4be9581       etcd-ha-691698
	0b984e1e87ad3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   dba80440eb6ef       kube-controller-manager-ha-691698
	
	
	==> coredns [0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53] <==
	[INFO] 10.244.2.2:58368 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000428526s
	[INFO] 10.244.0.4:60406 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00012543s
	[INFO] 10.244.0.4:50254 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000059389s
	[INFO] 10.244.0.4:48812 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00188043s
	[INFO] 10.244.1.2:43643 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173662s
	[INFO] 10.244.1.2:52260 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003470125s
	[INFO] 10.244.1.2:54673 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136747s
	[INFO] 10.244.2.2:34318 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000273221s
	[INFO] 10.244.2.2:60262 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001476515s
	[INFO] 10.244.2.2:57052 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142747s
	[INFO] 10.244.2.2:54120 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108997s
	[INFO] 10.244.1.2:44298 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081482s
	[INFO] 10.244.1.2:57785 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116033s
	[INFO] 10.244.2.2:38389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154869s
	[INFO] 10.244.2.2:33473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139061s
	[INFO] 10.244.2.2:36153 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064585s
	[INFO] 10.244.0.4:36379 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097216s
	[INFO] 10.244.0.4:47834 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063726s
	[INFO] 10.244.1.2:33111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120166s
	[INFO] 10.244.2.2:43983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122897s
	[INFO] 10.244.2.2:35012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148813s
	[INFO] 10.244.2.2:40714 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00011869s
	[INFO] 10.244.0.4:44215 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086794s
	[INFO] 10.244.0.4:38040 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005703s
	[INFO] 10.244.0.4:50677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108307s
	
	
	==> coredns [833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486] <==
	[INFO] 10.244.1.2:56075 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163005s
	[INFO] 10.244.1.2:34415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116898s
	[INFO] 10.244.1.2:36747 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189708s
	[INFO] 10.244.2.2:38790 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020996s
	[INFO] 10.244.2.2:56602 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001921401s
	[INFO] 10.244.2.2:34056 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219216s
	[INFO] 10.244.2.2:60410 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161507s
	[INFO] 10.244.0.4:59522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147092s
	[INFO] 10.244.0.4:33605 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742361s
	[INFO] 10.244.0.4:54567 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076754s
	[INFO] 10.244.0.4:35616 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072926s
	[INFO] 10.244.0.4:50762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001270357s
	[INFO] 10.244.0.4:56719 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059193s
	[INFO] 10.244.0.4:42114 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124091s
	[INFO] 10.244.0.4:54680 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047725s
	[INFO] 10.244.1.2:33443 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111093s
	[INFO] 10.244.1.2:60576 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102839s
	[INFO] 10.244.2.2:47142 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084964s
	[INFO] 10.244.0.4:35741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015832s
	[INFO] 10.244.0.4:39817 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103529s
	[INFO] 10.244.1.2:45931 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134869s
	[INFO] 10.244.1.2:36836 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000217632s
	[INFO] 10.244.1.2:59273 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107311s
	[INFO] 10.244.2.2:49049 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000205027s
	[INFO] 10.244.0.4:42280 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127437s
	
	
	==> describe nodes <==
	Name:               ha-691698
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_31_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:30:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:38:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:34:05 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:34:05 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:34:05 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:34:05 +0000   Mon, 29 Jul 2024 11:31:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-691698
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ffcbde1a62f4ed28ef2171c0da37339
	  System UUID:                8ffcbde1-a62f-4ed2-8ef2-171c0da37339
	  Boot ID:                    f8eb0442-fda7-4803-ab40-821f5c33cb8d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-t69zw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 coredns-7db6d8ff4d-p7zbj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m7s
	  kube-system                 coredns-7db6d8ff4d-r48d8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m7s
	  kube-system                 etcd-ha-691698                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m20s
	  kube-system                 kindnet-gl972                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m7s
	  kube-system                 kube-apiserver-ha-691698             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-controller-manager-ha-691698    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-proxy-5hn2s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-scheduler-ha-691698             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-vip-ha-691698                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m5s   kube-proxy       
	  Normal  Starting                 7m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m20s  kubelet          Node ha-691698 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m20s  kubelet          Node ha-691698 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m20s  kubelet          Node ha-691698 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m8s   node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal  NodeReady                6m52s  kubelet          Node ha-691698 status is now: NodeReady
	  Normal  RegisteredNode           6m2s   node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal  RegisteredNode           4m50s  node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	
	
	Name:               ha-691698-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_32_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:32:01 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:34:55 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 11:34:04 +0000   Mon, 29 Jul 2024 11:35:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 11:34:04 +0000   Mon, 29 Jul 2024 11:35:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 11:34:04 +0000   Mon, 29 Jul 2024 11:35:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 11:34:04 +0000   Mon, 29 Jul 2024 11:35:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-691698-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c019d6e64b644eff86b333652cd5328b
	  System UUID:                c019d6e6-4b64-4eff-86b3-33652cd5328b
	  Boot ID:                    ffc361c1-a45a-45ad-9852-96429352504d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-22qb4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-691698-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-wrx27                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m20s
	  kube-system                 kube-apiserver-ha-691698-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-691698-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-8p4nc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-scheduler-ha-691698-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-vip-ha-691698-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m20s (x8 over 6m20s)  kubelet          Node ha-691698-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s (x8 over 6m20s)  kubelet          Node ha-691698-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s (x7 over 6m20s)  kubelet          Node ha-691698-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m18s                  node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           4m50s                  node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  NodeNotReady             2m44s                  node-controller  Node ha-691698-m02 status is now: NodeNotReady
	
	
	Name:               ha-691698-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_33_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:33:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:38:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:34:15 +0000   Mon, 29 Jul 2024 11:33:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:34:15 +0000   Mon, 29 Jul 2024 11:33:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:34:15 +0000   Mon, 29 Jul 2024 11:33:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:34:15 +0000   Mon, 29 Jul 2024 11:33:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    ha-691698-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc0ebb3b7dee46c2bbb6e4b87cde5294
	  System UUID:                dc0ebb3b-7dee-46c2-bbb6-e4b87cde5294
	  Boot ID:                    793cbd49-8fb8-4fa0-9374-8327f823ecfb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-72n5l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-691698-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m5s
	  kube-system                 kindnet-n929l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m7s
	  kube-system                 kube-apiserver-ha-691698-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-ha-691698-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-vd69n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-scheduler-ha-691698-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-vip-ha-691698-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m2s                 kube-proxy       
	  Normal  RegisteredNode           5m7s                 node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	  Normal  NodeHasSufficientMemory  5m7s (x8 over 5m7s)  kubelet          Node ha-691698-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x8 over 5m7s)  kubelet          Node ha-691698-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x7 over 5m7s)  kubelet          Node ha-691698-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	  Normal  RegisteredNode           4m50s                node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	
	
	Name:               ha-691698-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_34_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:38:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:34:50 +0000   Mon, 29 Jul 2024 11:34:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:34:50 +0000   Mon, 29 Jul 2024 11:34:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:34:50 +0000   Mon, 29 Jul 2024 11:34:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:34:50 +0000   Mon, 29 Jul 2024 11:34:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-691698-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 acedffa77bf44161b125b5360bc5ba83
	  System UUID:                acedffa7-7bf4-4161-b125-b5360bc5ba83
	  Boot ID:                    e24b0a1a-2dbd-4235-9799-fdae94d4486d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pknpn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m1s
	  kube-system                 kube-proxy-9k2mb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m2s (x2 over 4m2s)  kubelet          Node ha-691698-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x2 over 4m2s)  kubelet          Node ha-691698-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x2 over 4m2s)  kubelet          Node ha-691698-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal  NodeReady                3m42s                kubelet          Node ha-691698-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 11:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048983] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036911] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.696256] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.842909] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.530183] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.170622] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.056672] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055838] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.156855] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.147139] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.275583] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.097124] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +4.229544] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.063086] kauditd_printk_skb: 158 callbacks suppressed
	[Jul29 11:31] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[  +0.086846] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.595904] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.192166] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 11:32] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d] <==
	{"level":"warn","ts":"2024-07-29T11:38:21.325456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.329091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.336097Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.339623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.349162Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.355964Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.365053Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.369373Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.373839Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.382586Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.383727Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.391624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.399606Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.403781Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.407412Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.415651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.421318Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.424781Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.427479Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.43139Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.434989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.440204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.44924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.456218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T11:38:21.511519Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38b93d7e943acb5d","from":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:38:21 up 7 min,  0 users,  load average: 0.41, 0.25, 0.13
	Linux ha-691698 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756] <==
	I0729 11:37:49.467072       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:37:59.465402       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:37:59.465444       1 main.go:299] handling current node
	I0729 11:37:59.465459       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:37:59.465465       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:37:59.465589       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:37:59.465609       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:37:59.465658       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:37:59.465716       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:38:09.465917       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:38:09.466022       1 main.go:299] handling current node
	I0729 11:38:09.466049       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:38:09.466067       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:38:09.466197       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:38:09.466233       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:38:09.466304       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:38:09.466323       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:38:19.457201       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:38:19.457325       1 main.go:299] handling current node
	I0729 11:38:19.457356       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:38:19.457375       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:38:19.457533       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:38:19.457577       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:38:19.457651       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:38:19.457742       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2c63f4ac923395e3c4f21210b98f155c47ba02f4a51916c9b755155f96154ac6] <==
	I0729 11:31:00.064622       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0729 11:31:00.071166       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.244]
	I0729 11:31:00.072238       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 11:31:00.076602       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 11:31:00.191035       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 11:31:01.601915       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 11:31:01.619848       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 11:31:01.633628       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 11:31:13.500488       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0729 11:31:14.498332       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0729 11:33:47.806521       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51682: use of closed network connection
	E0729 11:33:47.990492       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51688: use of closed network connection
	E0729 11:33:48.362479       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51722: use of closed network connection
	E0729 11:33:48.544790       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51740: use of closed network connection
	E0729 11:33:48.726572       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51764: use of closed network connection
	E0729 11:33:48.910453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51782: use of closed network connection
	E0729 11:33:49.094351       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51798: use of closed network connection
	E0729 11:33:49.275269       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51818: use of closed network connection
	E0729 11:33:49.565230       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51850: use of closed network connection
	E0729 11:33:49.753170       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51880: use of closed network connection
	E0729 11:33:49.938538       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51896: use of closed network connection
	E0729 11:33:50.112311       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51920: use of closed network connection
	E0729 11:33:50.291182       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51936: use of closed network connection
	E0729 11:33:50.470643       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51952: use of closed network connection
	W0729 11:35:20.087318       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.23 192.168.39.244]
	
	
	==> kube-controller-manager [0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d] <==
	I0729 11:33:42.483797       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.018617ms"
	I0729 11:33:42.483958       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.429µs"
	I0729 11:33:42.499060       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.73µs"
	I0729 11:33:42.503898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.279µs"
	I0729 11:33:42.602962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.870781ms"
	I0729 11:33:42.751049       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="147.982928ms"
	I0729 11:33:42.773524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.412576ms"
	I0729 11:33:42.773737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.862µs"
	I0729 11:33:42.826198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.806913ms"
	I0729 11:33:42.828414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.92µs"
	I0729 11:33:44.284995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.158µs"
	I0729 11:33:45.169101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.028258ms"
	I0729 11:33:45.169966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="208.869µs"
	I0729 11:33:45.470195       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.740556ms"
	I0729 11:33:45.470853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="283.717µs"
	I0729 11:33:47.241491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.966998ms"
	I0729 11:33:47.241822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.057µs"
	E0729 11:34:19.801986       1 certificate_controller.go:146] Sync csr-2557s failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-2557s": the object has been modified; please apply your changes to the latest version and try again
	I0729 11:34:20.090225       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-691698-m04\" does not exist"
	I0729 11:34:20.120158       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-691698-m04" podCIDRs=["10.244.3.0/24"]
	I0729 11:34:23.680106       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-691698-m04"
	I0729 11:34:39.639383       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-691698-m04"
	I0729 11:35:37.066172       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-691698-m04"
	I0729 11:35:37.131448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.969767ms"
	I0729 11:35:37.132185       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.299µs"
	
	
	==> kube-proxy [2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323] <==
	I0729 11:31:15.469088       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:31:15.512253       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.244"]
	I0729 11:31:15.584276       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:31:15.584317       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:31:15.584333       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:31:15.587247       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:31:15.587800       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:31:15.587855       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:31:15.589586       1 config.go:192] "Starting service config controller"
	I0729 11:31:15.590577       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:31:15.590875       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:31:15.590911       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:31:15.592642       1 config.go:319] "Starting node config controller"
	I0729 11:31:15.593517       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:31:15.691565       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:31:15.691660       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:31:15.693908       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0] <==
	W0729 11:30:59.393612       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:30:59.393653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:30:59.483988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:30:59.484034       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:30:59.504614       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:30:59.504700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:30:59.531800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:30:59.531829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:30:59.549354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:30:59.549416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:30:59.573894       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:30:59.573973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 11:30:59.609853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:30:59.609951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:30:59.676813       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:30:59.676934       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 11:31:01.526304       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 11:34:20.179491       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-pqx6x\": pod kube-proxy-pqx6x is already assigned to node \"ha-691698-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-pqx6x" node="ha-691698-m04"
	E0729 11:34:20.180944       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 88b81468-2d64-4496-a593-68698a8a161e(kube-system/kube-proxy-pqx6x) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-pqx6x"
	E0729 11:34:20.181390       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-pqx6x\": pod kube-proxy-pqx6x is already assigned to node \"ha-691698-m04\"" pod="kube-system/kube-proxy-pqx6x"
	I0729 11:34:20.181582       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-pqx6x" node="ha-691698-m04"
	E0729 11:34:20.181307       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pknpn\": pod kindnet-pknpn is already assigned to node \"ha-691698-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pknpn" node="ha-691698-m04"
	E0729 11:34:20.186876       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ea8a7c41-23fc-4ded-80ef-41744345895d(kube-system/kindnet-pknpn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pknpn"
	E0729 11:34:20.187114       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pknpn\": pod kindnet-pknpn is already assigned to node \"ha-691698-m04\"" pod="kube-system/kindnet-pknpn"
	I0729 11:34:20.187205       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pknpn" node="ha-691698-m04"
	
	
	==> kubelet <==
	Jul 29 11:34:01 ha-691698 kubelet[1382]: E0729 11:34:01.566975    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:34:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:34:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:34:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:34:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:35:01 ha-691698 kubelet[1382]: E0729 11:35:01.568463    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:35:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:35:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:35:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:35:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:36:01 ha-691698 kubelet[1382]: E0729 11:36:01.568974    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:36:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:36:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:36:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:36:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:37:01 ha-691698 kubelet[1382]: E0729 11:37:01.567565    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:37:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:37:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:37:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:37:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:38:01 ha-691698 kubelet[1382]: E0729 11:38:01.568029    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:38:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:38:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:38:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:38:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-691698 -n ha-691698
helpers_test.go:261: (dbg) Run:  kubectl --context ha-691698 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (57.37s)
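The post-mortem above shows ha-691698-m02 reporting Ready=Unknown (reason NodeStatusUnknown) after the secondary-node restart. For reference only, the snippet below is a minimal client-go sketch (not part of the captured test output) that lists each node's Ready condition the same way "kubectl describe node" does; the kubeconfig path is a placeholder, not a value taken from this run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The kubeconfig path is an assumption; point it at the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				// A status of "Unknown" matches the NodeStatusUnknown reason seen above.
				fmt.Printf("%s Ready=%s reason=%s\n", node.Name, cond.Status, cond.Reason)
			}
		}
	}
}

Pointed at the profile's kubeconfig, this reproduces the per-node Ready/NotReady view without rerunning the whole suite.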

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (400.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-691698 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-691698 -v=7 --alsologtostderr
E0729 11:39:27.393297  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:39:55.078046  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-691698 -v=7 --alsologtostderr: exit status 82 (2m1.865824545s)

                                                
                                                
-- stdout --
	* Stopping node "ha-691698-m04"  ...
	* Stopping node "ha-691698-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:38:22.907161  141778 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:38:22.907282  141778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:38:22.907292  141778 out.go:304] Setting ErrFile to fd 2...
	I0729 11:38:22.907296  141778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:38:22.907510  141778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:38:22.907734  141778 out.go:298] Setting JSON to false
	I0729 11:38:22.907863  141778 mustload.go:65] Loading cluster: ha-691698
	I0729 11:38:22.908245  141778 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:38:22.908332  141778 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:38:22.908513  141778 mustload.go:65] Loading cluster: ha-691698
	I0729 11:38:22.908637  141778 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:38:22.908669  141778 stop.go:39] StopHost: ha-691698-m04
	I0729 11:38:22.909109  141778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:22.909159  141778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:22.924904  141778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38401
	I0729 11:38:22.925386  141778 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:22.925990  141778 main.go:141] libmachine: Using API Version  1
	I0729 11:38:22.926029  141778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:22.926359  141778 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:22.928915  141778 out.go:177] * Stopping node "ha-691698-m04"  ...
	I0729 11:38:22.930160  141778 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 11:38:22.930198  141778 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:38:22.930487  141778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 11:38:22.930516  141778 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:38:22.933512  141778 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:22.933905  141778 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:34:05 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:38:22.933939  141778 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:38:22.934145  141778 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:38:22.934337  141778 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:38:22.934514  141778 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:38:22.934659  141778 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	I0729 11:38:23.015369  141778 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 11:38:23.069315  141778 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 11:38:23.123423  141778 main.go:141] libmachine: Stopping "ha-691698-m04"...
	I0729 11:38:23.123459  141778 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:38:23.125137  141778 main.go:141] libmachine: (ha-691698-m04) Calling .Stop
	I0729 11:38:23.128891  141778 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 0/120
	I0729 11:38:24.291147  141778 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:38:24.292490  141778 main.go:141] libmachine: Machine "ha-691698-m04" was stopped.
	I0729 11:38:24.292509  141778 stop.go:75] duration metric: took 1.362354159s to stop
	I0729 11:38:24.292549  141778 stop.go:39] StopHost: ha-691698-m03
	I0729 11:38:24.292841  141778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:38:24.292885  141778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:38:24.308084  141778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
	I0729 11:38:24.308486  141778 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:38:24.309005  141778 main.go:141] libmachine: Using API Version  1
	I0729 11:38:24.309031  141778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:38:24.309375  141778 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:38:24.311205  141778 out.go:177] * Stopping node "ha-691698-m03"  ...
	I0729 11:38:24.312307  141778 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 11:38:24.312335  141778 main.go:141] libmachine: (ha-691698-m03) Calling .DriverName
	I0729 11:38:24.312566  141778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 11:38:24.312592  141778 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHHostname
	I0729 11:38:24.315681  141778 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:24.316135  141778 main.go:141] libmachine: (ha-691698-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:96:46", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:32:40 +0000 UTC Type:0 Mac:52:54:00:67:96:46 Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-691698-m03 Clientid:01:52:54:00:67:96:46}
	I0729 11:38:24.316166  141778 main.go:141] libmachine: (ha-691698-m03) DBG | domain ha-691698-m03 has defined IP address 192.168.39.23 and MAC address 52:54:00:67:96:46 in network mk-ha-691698
	I0729 11:38:24.316355  141778 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHPort
	I0729 11:38:24.316563  141778 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHKeyPath
	I0729 11:38:24.316713  141778 main.go:141] libmachine: (ha-691698-m03) Calling .GetSSHUsername
	I0729 11:38:24.316861  141778 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m03/id_rsa Username:docker}
	I0729 11:38:24.407871  141778 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 11:38:24.461267  141778 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 11:38:24.515797  141778 main.go:141] libmachine: Stopping "ha-691698-m03"...
	I0729 11:38:24.515825  141778 main.go:141] libmachine: (ha-691698-m03) Calling .GetState
	I0729 11:38:24.517526  141778 main.go:141] libmachine: (ha-691698-m03) Calling .Stop
	I0729 11:38:24.521141  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 0/120
	I0729 11:38:25.522592  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 1/120
	I0729 11:38:26.524107  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 2/120
	I0729 11:38:27.525638  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 3/120
	I0729 11:38:28.527351  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 4/120
	I0729 11:38:29.529439  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 5/120
	I0729 11:38:30.531049  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 6/120
	I0729 11:38:31.532661  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 7/120
	I0729 11:38:32.534386  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 8/120
	I0729 11:38:33.535849  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 9/120
	I0729 11:38:34.538237  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 10/120
	I0729 11:38:35.539672  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 11/120
	I0729 11:38:36.540882  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 12/120
	I0729 11:38:37.542711  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 13/120
	I0729 11:38:38.544183  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 14/120
	I0729 11:38:39.546211  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 15/120
	I0729 11:38:40.547763  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 16/120
	I0729 11:38:41.549198  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 17/120
	I0729 11:38:42.551515  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 18/120
	I0729 11:38:43.552919  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 19/120
	I0729 11:38:44.554878  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 20/120
	I0729 11:38:45.556432  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 21/120
	I0729 11:38:46.557873  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 22/120
	I0729 11:38:47.559337  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 23/120
	I0729 11:38:48.560884  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 24/120
	I0729 11:38:49.562451  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 25/120
	I0729 11:38:50.564222  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 26/120
	I0729 11:38:51.565756  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 27/120
	I0729 11:38:52.567296  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 28/120
	I0729 11:38:53.568829  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 29/120
	I0729 11:38:54.570992  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 30/120
	I0729 11:38:55.572588  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 31/120
	I0729 11:38:56.574173  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 32/120
	I0729 11:38:57.576147  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 33/120
	I0729 11:38:58.577807  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 34/120
	I0729 11:38:59.579692  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 35/120
	I0729 11:39:00.581057  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 36/120
	I0729 11:39:01.582574  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 37/120
	I0729 11:39:02.584298  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 38/120
	I0729 11:39:03.585630  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 39/120
	I0729 11:39:04.587655  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 40/120
	I0729 11:39:05.589139  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 41/120
	I0729 11:39:06.591656  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 42/120
	I0729 11:39:07.593097  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 43/120
	I0729 11:39:08.594461  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 44/120
	I0729 11:39:09.596495  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 45/120
	I0729 11:39:10.598087  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 46/120
	I0729 11:39:11.599784  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 47/120
	I0729 11:39:12.601365  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 48/120
	I0729 11:39:13.603361  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 49/120
	I0729 11:39:14.605309  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 50/120
	I0729 11:39:15.606844  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 51/120
	I0729 11:39:16.608348  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 52/120
	I0729 11:39:17.609718  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 53/120
	I0729 11:39:18.611177  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 54/120
	I0729 11:39:19.612993  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 55/120
	I0729 11:39:20.614265  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 56/120
	I0729 11:39:21.615932  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 57/120
	I0729 11:39:22.618784  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 58/120
	I0729 11:39:23.620468  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 59/120
	I0729 11:39:24.622373  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 60/120
	I0729 11:39:25.623919  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 61/120
	I0729 11:39:26.625275  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 62/120
	I0729 11:39:27.626653  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 63/120
	I0729 11:39:28.628029  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 64/120
	I0729 11:39:29.629913  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 65/120
	I0729 11:39:30.631216  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 66/120
	I0729 11:39:31.632586  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 67/120
	I0729 11:39:32.634106  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 68/120
	I0729 11:39:33.635495  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 69/120
	I0729 11:39:34.637330  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 70/120
	I0729 11:39:35.638727  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 71/120
	I0729 11:39:36.640150  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 72/120
	I0729 11:39:37.641470  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 73/120
	I0729 11:39:38.642864  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 74/120
	I0729 11:39:39.644808  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 75/120
	I0729 11:39:40.646241  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 76/120
	I0729 11:39:41.648125  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 77/120
	I0729 11:39:42.649325  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 78/120
	I0729 11:39:43.651494  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 79/120
	I0729 11:39:44.653371  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 80/120
	I0729 11:39:45.655419  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 81/120
	I0729 11:39:46.657009  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 82/120
	I0729 11:39:47.658637  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 83/120
	I0729 11:39:48.660160  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 84/120
	I0729 11:39:49.662151  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 85/120
	I0729 11:39:50.663485  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 86/120
	I0729 11:39:51.664890  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 87/120
	I0729 11:39:52.667002  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 88/120
	I0729 11:39:53.668560  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 89/120
	I0729 11:39:54.670347  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 90/120
	I0729 11:39:55.671727  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 91/120
	I0729 11:39:56.673184  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 92/120
	I0729 11:39:57.675646  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 93/120
	I0729 11:39:58.677263  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 94/120
	I0729 11:39:59.679302  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 95/120
	I0729 11:40:00.680703  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 96/120
	I0729 11:40:01.682221  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 97/120
	I0729 11:40:02.684834  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 98/120
	I0729 11:40:03.686128  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 99/120
	I0729 11:40:04.688036  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 100/120
	I0729 11:40:05.689541  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 101/120
	I0729 11:40:06.691057  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 102/120
	I0729 11:40:07.692754  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 103/120
	I0729 11:40:08.694189  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 104/120
	I0729 11:40:09.696144  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 105/120
	I0729 11:40:10.698005  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 106/120
	I0729 11:40:11.699628  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 107/120
	I0729 11:40:12.701043  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 108/120
	I0729 11:40:13.702426  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 109/120
	I0729 11:40:14.704506  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 110/120
	I0729 11:40:15.705737  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 111/120
	I0729 11:40:16.708045  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 112/120
	I0729 11:40:17.709438  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 113/120
	I0729 11:40:18.711179  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 114/120
	I0729 11:40:19.713294  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 115/120
	I0729 11:40:20.714912  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 116/120
	I0729 11:40:21.716404  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 117/120
	I0729 11:40:22.717936  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 118/120
	I0729 11:40:23.719529  141778 main.go:141] libmachine: (ha-691698-m03) Waiting for machine to stop 119/120
	I0729 11:40:24.720680  141778 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 11:40:24.720747  141778 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 11:40:24.723017  141778 out.go:177] 
	W0729 11:40:24.724542  141778 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 11:40:24.724558  141778 out.go:239] * 
	* 
	W0729 11:40:24.726810  141778 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:40:24.728219  141778 out.go:177] 

                                                
                                                
** /stderr **
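
The stderr above shows libmachine's stop path: it appears to issue a single Stop call for the domain, then polls the VM state roughly once a second for up to 120 attempts ("Waiting for machine to stop N/120") and gives up with GUEST_STOP_TIMEOUT once the machine is still reported as Running. A minimal sketch of that bounded-polling pattern in Go, using hypothetical requestStop/getState callbacks rather than the real libmachine driver API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop mirrors the loop in the log above: request a stop once, then
	// poll the VM state up to maxAttempts times, one second apart, and give up
	// if the machine never leaves the Running state.
	func waitForStop(requestStop func() error, getState func() (string, error), maxAttempts int) error {
		if err := requestStop(); err != nil {
			return err
		}
		for i := 0; i < maxAttempts; i++ {
			state, err := getState()
			if err != nil {
				return err
			}
			if state != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		// This is the condition the CLI surfaces as GUEST_STOP_TIMEOUT.
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Toy driver that never stops, so the call below times out quickly.
		requestStop := func() error { return nil }
		getState := func() (string, error) { return "Running", nil }
		if err := waitForStop(requestStop, getState, 3); err != nil {
			fmt.Println("stop err:", err)
		}
	}
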
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-691698 -v=7 --alsologtostderr" : exit status 82
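
The test harness drives the minikube binary as a subprocess and fails the test on any non-zero exit, which is how the exit status 82 from the timed-out stop surfaces here. A minimal sketch of that pattern; the binary path, profile name, and flags are taken from the log, while the helper and test names are illustrative, not the real helpers in ha_test.go:

	package ha

	import (
		"os/exec"
		"testing"
	)

	// runMinikube is a simplified stand-in for the test helpers: run the CLI,
	// capture combined output, and let the caller decide how to treat failure.
	func runMinikube(t *testing.T, args ...string) (string, error) {
		t.Helper()
		cmd := exec.Command("out/minikube-linux-amd64", args...)
		out, err := cmd.CombinedOutput()
		return string(out), err
	}

	func TestStopThenRestart(t *testing.T) {
		if out, err := runMinikube(t, "stop", "-p", "ha-691698", "-v=7", "--alsologtostderr"); err != nil {
			// The exit status 82 from the timed-out stop would land here.
			t.Fatalf("failed to run minikube stop: %v\n%s", err, out)
		}
		if out, err := runMinikube(t, "start", "-p", "ha-691698", "--wait=true", "-v=7", "--alsologtostderr"); err != nil {
			t.Fatalf("failed to restart cluster: %v\n%s", err, out)
		}
	}
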
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-691698 --wait=true -v=7 --alsologtostderr
E0729 11:44:27.394238  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-691698 --wait=true -v=7 --alsologtostderr: (4m36.001119191s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-691698
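
Before attempting the stop, the stderr above also shows each node's /etc/cni and /etc/kubernetes being rsynced into /var/lib/minikube/backup over SSH, presumably so the subsequent start --wait=true can restore that state. A minimal sketch of that backup step, shelling out to plain ssh; the target address is a placeholder and the real run authenticates as docker with the per-machine id_rsa key:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// backupNodeConfig mirrors the backup step in the log: create the backup
	// directory, then rsync each config tree into it with --archive --relative
	// so the original paths are preserved under /var/lib/minikube/backup.
	func backupNodeConfig(sshTarget string) error {
		cmds := []string{
			"sudo mkdir -p /var/lib/minikube/backup",
			"sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup",
			"sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup",
		}
		for _, c := range cmds {
			if out, err := exec.Command("ssh", sshTarget, c).CombinedOutput(); err != nil {
				return fmt.Errorf("%q failed: %v: %s", c, err, out)
			}
		}
		return nil
	}

	func main() {
		// Placeholder node address; not taken from this run.
		if err := backupNodeConfig("docker@192.168.39.84"); err != nil {
			fmt.Println("backup failed:", err)
		}
	}
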
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-691698 -n ha-691698
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-691698 logs -n 25: (1.681734938s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m02:/home/docker/cp-test_ha-691698-m03_ha-691698-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m02 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m03_ha-691698-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04:/home/docker/cp-test_ha-691698-m03_ha-691698-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m04 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m03_ha-691698-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp testdata/cp-test.txt                                                | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1858176500/001/cp-test_ha-691698-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698:/home/docker/cp-test_ha-691698-m04_ha-691698.txt                       |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698 sudo cat                                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698.txt                                 |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m02:/home/docker/cp-test_ha-691698-m04_ha-691698-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m02 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03:/home/docker/cp-test_ha-691698-m04_ha-691698-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m03 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-691698 node stop m02 -v=7                                                     | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-691698 node start m02 -v=7                                                    | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-691698 -v=7                                                           | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-691698 -v=7                                                                | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-691698 --wait=true -v=7                                                    | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC | 29 Jul 24 11:45 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-691698                                                                | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:45 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:40:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:40:24.776357  142228 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:40:24.776768  142228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:40:24.776781  142228 out.go:304] Setting ErrFile to fd 2...
	I0729 11:40:24.776788  142228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:40:24.777255  142228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:40:24.777987  142228 out.go:298] Setting JSON to false
	I0729 11:40:24.779118  142228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4976,"bootTime":1722248249,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:40:24.779187  142228 start.go:139] virtualization: kvm guest
	I0729 11:40:24.781387  142228 out.go:177] * [ha-691698] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:40:24.782985  142228 notify.go:220] Checking for updates...
	I0729 11:40:24.783002  142228 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 11:40:24.784449  142228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:40:24.785837  142228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:40:24.787286  142228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:40:24.788593  142228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:40:24.790034  142228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:40:24.791895  142228 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:40:24.792024  142228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:40:24.792472  142228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:40:24.792552  142228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:40:24.808026  142228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44009
	I0729 11:40:24.808444  142228 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:40:24.809076  142228 main.go:141] libmachine: Using API Version  1
	I0729 11:40:24.809106  142228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:40:24.809411  142228 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:40:24.809591  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:40:24.847051  142228 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:40:24.848333  142228 start.go:297] selected driver: kvm2
	I0729 11:40:24.848348  142228 start.go:901] validating driver "kvm2" against &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:40:24.848498  142228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:40:24.848905  142228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:40:24.849014  142228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:40:24.866887  142228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:40:24.867607  142228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:40:24.867656  142228 cni.go:84] Creating CNI manager for ""
	I0729 11:40:24.867663  142228 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 11:40:24.867728  142228 start.go:340] cluster config:
	{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:40:24.867912  142228 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:40:24.870070  142228 out.go:177] * Starting "ha-691698" primary control-plane node in "ha-691698" cluster
	I0729 11:40:24.871246  142228 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:40:24.871284  142228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 11:40:24.871298  142228 cache.go:56] Caching tarball of preloaded images
	I0729 11:40:24.871377  142228 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:40:24.871390  142228 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:40:24.871551  142228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:40:24.871793  142228 start.go:360] acquireMachinesLock for ha-691698: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:40:24.871838  142228 start.go:364] duration metric: took 25.882µs to acquireMachinesLock for "ha-691698"
	I0729 11:40:24.871850  142228 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:40:24.871856  142228 fix.go:54] fixHost starting: 
	I0729 11:40:24.872198  142228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:40:24.872236  142228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:40:24.887142  142228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0729 11:40:24.887585  142228 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:40:24.888041  142228 main.go:141] libmachine: Using API Version  1
	I0729 11:40:24.888067  142228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:40:24.888429  142228 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:40:24.888602  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:40:24.888753  142228 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:40:24.890504  142228 fix.go:112] recreateIfNeeded on ha-691698: state=Running err=<nil>
	W0729 11:40:24.890525  142228 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:40:24.893184  142228 out.go:177] * Updating the running kvm2 "ha-691698" VM ...
	I0729 11:40:24.894638  142228 machine.go:94] provisionDockerMachine start ...
	I0729 11:40:24.894660  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:40:24.894892  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:24.897375  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:24.897825  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:24.897856  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:24.898018  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:40:24.898197  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:24.898344  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:24.898480  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:40:24.898661  142228 main.go:141] libmachine: Using SSH client type: native
	I0729 11:40:24.898896  142228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:40:24.898911  142228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:40:25.005231  142228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-691698
	
	I0729 11:40:25.005260  142228 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:40:25.005552  142228 buildroot.go:166] provisioning hostname "ha-691698"
	I0729 11:40:25.005574  142228 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:40:25.005779  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:25.008541  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.008878  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.008907  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.009069  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:40:25.009262  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.009422  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.009522  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:40:25.009646  142228 main.go:141] libmachine: Using SSH client type: native
	I0729 11:40:25.009857  142228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:40:25.009876  142228 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-691698 && echo "ha-691698" | sudo tee /etc/hostname
	I0729 11:40:25.131528  142228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-691698
	
	I0729 11:40:25.131580  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:25.134273  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.134739  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.134769  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.134954  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:40:25.135172  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.135382  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.135545  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:40:25.135689  142228 main.go:141] libmachine: Using SSH client type: native
	I0729 11:40:25.135881  142228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:40:25.135903  142228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-691698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-691698/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-691698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:40:25.241772  142228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:40:25.241800  142228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 11:40:25.241818  142228 buildroot.go:174] setting up certificates
	I0729 11:40:25.241828  142228 provision.go:84] configureAuth start
	I0729 11:40:25.241836  142228 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:40:25.242136  142228 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:40:25.245000  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.245437  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.245463  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.245642  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:25.248013  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.248332  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.248356  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.248462  142228 provision.go:143] copyHostCerts
	I0729 11:40:25.248493  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:40:25.248542  142228 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 11:40:25.248553  142228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:40:25.248630  142228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 11:40:25.248744  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:40:25.248775  142228 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 11:40:25.248785  142228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:40:25.248828  142228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 11:40:25.248890  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:40:25.248912  142228 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 11:40:25.248921  142228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:40:25.248953  142228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 11:40:25.249035  142228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.ha-691698 san=[127.0.0.1 192.168.39.244 ha-691698 localhost minikube]
	I0729 11:40:25.324097  142228 provision.go:177] copyRemoteCerts
	I0729 11:40:25.324177  142228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:40:25.324208  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:25.327170  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.327563  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.327597  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.327753  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:40:25.327970  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.328176  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:40:25.328340  142228 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:40:25.410895  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 11:40:25.410998  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:40:25.437258  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 11:40:25.437330  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:40:25.462358  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 11:40:25.462423  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 11:40:25.487750  142228 provision.go:87] duration metric: took 245.906243ms to configureAuth
	I0729 11:40:25.487785  142228 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:40:25.488085  142228 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:40:25.488169  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:25.490646  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.490997  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.491024  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.491204  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:40:25.491412  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.491594  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.491728  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:40:25.491857  142228 main.go:141] libmachine: Using SSH client type: native
	I0729 11:40:25.492022  142228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:40:25.492037  142228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:41:56.288822  142228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:41:56.288856  142228 machine.go:97] duration metric: took 1m31.394204468s to provisionDockerMachine
	I0729 11:41:56.288870  142228 start.go:293] postStartSetup for "ha-691698" (driver="kvm2")
	I0729 11:41:56.288882  142228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:41:56.288899  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.289266  142228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:41:56.289297  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:41:56.292548  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.292891  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.292921  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.293127  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:41:56.293338  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.293488  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:41:56.293612  142228 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:41:56.375916  142228 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:41:56.380305  142228 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:41:56.380341  142228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 11:41:56.380415  142228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 11:41:56.380488  142228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 11:41:56.380499  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /etc/ssl/certs/1209632.pem
	I0729 11:41:56.380588  142228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:41:56.390445  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:41:56.414898  142228 start.go:296] duration metric: took 126.009826ms for postStartSetup
	I0729 11:41:56.414963  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.415323  142228 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 11:41:56.415352  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:41:56.418237  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.418589  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.418623  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.418827  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:41:56.419035  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.419220  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:41:56.419380  142228 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	W0729 11:41:56.503390  142228 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 11:41:56.503423  142228 fix.go:56] duration metric: took 1m31.631562238s for fixHost
	I0729 11:41:56.503450  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:41:56.506250  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.506656  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.506677  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.506870  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:41:56.507069  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.507259  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.507400  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:41:56.507583  142228 main.go:141] libmachine: Using SSH client type: native
	I0729 11:41:56.507751  142228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:41:56.507760  142228 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 11:41:56.614056  142228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253316.587985785
	
	I0729 11:41:56.614085  142228 fix.go:216] guest clock: 1722253316.587985785
	I0729 11:41:56.614095  142228 fix.go:229] Guest: 2024-07-29 11:41:56.587985785 +0000 UTC Remote: 2024-07-29 11:41:56.503434193 +0000 UTC m=+91.765146958 (delta=84.551592ms)
	I0729 11:41:56.614123  142228 fix.go:200] guest clock delta is within tolerance: 84.551592ms
	I0729 11:41:56.614131  142228 start.go:83] releasing machines lock for "ha-691698", held for 1m31.7422846s
	I0729 11:41:56.614158  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.614457  142228 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:41:56.617095  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.617554  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.617580  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.617722  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.618341  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.618536  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.618659  142228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:41:56.618721  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:41:56.618777  142228 ssh_runner.go:195] Run: cat /version.json
	I0729 11:41:56.618803  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:41:56.621319  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.621422  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.621684  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.621709  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.621884  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.621904  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:41:56.621908  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.622091  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:41:56.622150  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.622253  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.622320  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:41:56.622367  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:41:56.622470  142228 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:41:56.622476  142228 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:41:56.733830  142228 ssh_runner.go:195] Run: systemctl --version
	I0729 11:41:56.743088  142228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:41:56.907039  142228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:41:56.913007  142228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:41:56.913075  142228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:41:56.922959  142228 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 11:41:56.922986  142228 start.go:495] detecting cgroup driver to use...
	I0729 11:41:56.923051  142228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:41:56.940108  142228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:41:56.954882  142228 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:41:56.954961  142228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:41:56.969223  142228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:41:56.983902  142228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:41:57.130869  142228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:41:57.278208  142228 docker.go:233] disabling docker service ...
	I0729 11:41:57.278293  142228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:41:57.295736  142228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:41:57.310177  142228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:41:57.457062  142228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:41:57.603860  142228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:41:57.618170  142228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:41:57.637555  142228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:41:57.637643  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.648700  142228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:41:57.648763  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.659809  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.670683  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.681281  142228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:41:57.692558  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.703876  142228 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.714676  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.725026  142228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:41:57.734476  142228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:41:57.744043  142228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:41:57.896926  142228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:41:58.175176  142228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:41:58.175264  142228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:41:58.180455  142228 start.go:563] Will wait 60s for crictl version
	I0729 11:41:58.180528  142228 ssh_runner.go:195] Run: which crictl
	I0729 11:41:58.184306  142228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:41:58.221661  142228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:41:58.221749  142228 ssh_runner.go:195] Run: crio --version
	I0729 11:41:58.249977  142228 ssh_runner.go:195] Run: crio --version
	I0729 11:41:58.282116  142228 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:41:58.283532  142228 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:41:58.286259  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:58.286626  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:58.286661  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:58.286874  142228 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:41:58.291838  142228 kubeadm.go:883] updating cluster {Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:41:58.291978  142228 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:41:58.292022  142228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:41:58.335821  142228 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:41:58.335847  142228 crio.go:433] Images already preloaded, skipping extraction
	I0729 11:41:58.335896  142228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:41:58.371463  142228 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:41:58.371491  142228 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:41:58.371505  142228 kubeadm.go:934] updating node { 192.168.39.244 8443 v1.30.3 crio true true} ...
	I0729 11:41:58.371640  142228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-691698 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:41:58.371728  142228 ssh_runner.go:195] Run: crio config
	I0729 11:41:58.421750  142228 cni.go:84] Creating CNI manager for ""
	I0729 11:41:58.421778  142228 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 11:41:58.421790  142228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:41:58.421824  142228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.244 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-691698 NodeName:ha-691698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:41:58.422000  142228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-691698"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:41:58.422024  142228 kube-vip.go:115] generating kube-vip config ...
	I0729 11:41:58.422077  142228 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 11:41:58.433584  142228 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 11:41:58.433737  142228 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 11:41:58.433805  142228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:41:58.443766  142228 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:41:58.443864  142228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 11:41:58.453558  142228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 11:41:58.473241  142228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:41:58.493642  142228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 11:41:58.513420  142228 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 11:41:58.536252  142228 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 11:41:58.540664  142228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:41:58.695769  142228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:41:58.710227  142228 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698 for IP: 192.168.39.244
	I0729 11:41:58.710254  142228 certs.go:194] generating shared ca certs ...
	I0729 11:41:58.710270  142228 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:41:58.710437  142228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 11:41:58.710535  142228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 11:41:58.710550  142228 certs.go:256] generating profile certs ...
	I0729 11:41:58.710627  142228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key
	I0729 11:41:58.710656  142228 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.a5028b36
	I0729 11:41:58.710668  142228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.a5028b36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244 192.168.39.5 192.168.39.23 192.168.39.254]
	I0729 11:41:58.871227  142228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.a5028b36 ...
	I0729 11:41:58.871262  142228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.a5028b36: {Name:mkdaac54e51c3106526d4dc2fc72bc59c935ccf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:41:58.871465  142228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.a5028b36 ...
	I0729 11:41:58.871482  142228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.a5028b36: {Name:mkef0f29cf9214d3068dd6b1e248f6f75204c16b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:41:58.871585  142228 certs.go:381] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.a5028b36 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt
	I0729 11:41:58.871744  142228 certs.go:385] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.a5028b36 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key
	I0729 11:41:58.871883  142228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key
	I0729 11:41:58.871899  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 11:41:58.871914  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 11:41:58.871928  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 11:41:58.871942  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 11:41:58.871954  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 11:41:58.871966  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 11:41:58.871978  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 11:41:58.871991  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 11:41:58.872037  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 11:41:58.872064  142228 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 11:41:58.872073  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 11:41:58.872092  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:41:58.872115  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:41:58.872140  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 11:41:58.872176  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:41:58.872201  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem -> /usr/share/ca-certificates/120963.pem
	I0729 11:41:58.872214  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /usr/share/ca-certificates/1209632.pem
	I0729 11:41:58.872227  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:41:58.872871  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:41:58.923682  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:41:59.028122  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:41:59.102997  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 11:41:59.291661  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 11:41:59.498211  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:41:59.702131  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:41:59.890939  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:42:00.008589  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 11:42:00.150765  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 11:42:00.247320  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:42:00.283964  142228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:42:00.307915  142228 ssh_runner.go:195] Run: openssl version
	I0729 11:42:00.316193  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 11:42:00.335882  142228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 11:42:00.345245  142228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 11:42:00.345319  142228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 11:42:00.363844  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
	I0729 11:42:00.395775  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 11:42:00.423321  142228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 11:42:00.431700  142228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 11:42:00.431773  142228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 11:42:00.440624  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:42:00.455032  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:42:00.468899  142228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:42:00.475414  142228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:42:00.475482  142228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:42:00.483106  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:42:00.496646  142228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:42:00.503675  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:42:00.509769  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:42:00.517537  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:42:00.525966  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:42:00.534382  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:42:00.542592  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 11:42:00.550650  142228 kubeadm.go:392] StartCluster: {Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:42:00.550776  142228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:42:00.550825  142228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:42:00.603481  142228 cri.go:89] found id: "29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843"
	I0729 11:42:00.603514  142228 cri.go:89] found id: "ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11"
	I0729 11:42:00.603521  142228 cri.go:89] found id: "51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46"
	I0729 11:42:00.603531  142228 cri.go:89] found id: "05903437cede24841c12e3528eca50aacca702174d5674c4694e77480051fc97"
	I0729 11:42:00.603536  142228 cri.go:89] found id: "e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566"
	I0729 11:42:00.603540  142228 cri.go:89] found id: "f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397"
	I0729 11:42:00.603544  142228 cri.go:89] found id: "24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07"
	I0729 11:42:00.603548  142228 cri.go:89] found id: "5fb3e15e6fe5f14a206b948a13cf85693e19cec32f336f85024559f542522af4"
	I0729 11:42:00.603552  142228 cri.go:89] found id: "cfc6bb6aa4f7b3d7c9249429bd4afd574bc9d92d4bb437c37d3259df42dee674"
	I0729 11:42:00.603560  142228 cri.go:89] found id: "7d15eebdab78d379c854debcbf3c7c75ebc774b65df62b203aa7b6aafcd4c7ae"
	I0729 11:42:00.603564  142228 cri.go:89] found id: "3e163d1ef4b1b78646dacf650dea3882b88d05b40fc7721405a3095135eab4bb"
	I0729 11:42:00.603570  142228 cri.go:89] found id: "0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53"
	I0729 11:42:00.603573  142228 cri.go:89] found id: "833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486"
	I0729 11:42:00.603578  142228 cri.go:89] found id: "2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756"
	I0729 11:42:00.603585  142228 cri.go:89] found id: "2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323"
	I0729 11:42:00.603590  142228 cri.go:89] found id: "24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0"
	I0729 11:42:00.603593  142228 cri.go:89] found id: "1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d"
	I0729 11:42:00.603600  142228 cri.go:89] found id: "0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d"
	I0729 11:42:00.603607  142228 cri.go:89] found id: ""
	I0729 11:42:00.603668  142228 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.482224604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253501482200401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=659517d1-5ce1-4159-910f-f39685ca94b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.483045057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=500951cd-f55b-4f21-a12c-230947ce1e14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.483130279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=500951cd-f55b-4f21-a12c-230947ce1e14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.483596695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:973ad904749b3bac9b05f8e71171231ae6361a24ead1f752e062f6279e91493e,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253498553102228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdc756e57258c28b832d79ce01adca1bd5873b5d76b82e532a622f4e38a232e,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253363559198417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76745dad7b41c48929f36faf8ef63848b9b6cfd4a087a0fa1176ba5de5bdea70,PodSandboxId:50f119d8186f40739369e20530336e4a3cdd5817447844cabdc3ae1072d5d80f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722253352853383410,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb1ff299a498b985d77ca9503897a1f50bccd5168d3155c55a706e62986230f,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253351734412633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a86f388a73255f4296d5a1c5912289fa84b6271f3cafd3e24cc4b0dda2f3554d,PodSandboxId:2ab993e81dcd50362030977dacfd8a791b23516b398ad194c81fd25447f64ce4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722253330700237365,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bba932b45fc610b002ddc98e5da80b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722253319760490986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843,PodSandboxId:7a603ee93794e9172dec48067d3971c2b975748779f16725f61f391cb635a3b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320086621959,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11,PodSandboxId:db19f608bd022d02c46fc19a1f9415ba47dd011ce34d0466e74ec1a7fafadd52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320002066473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46,PodSandboxId:e00235e1a109fea7897fb4cc15e55a8a04911b5211ffd4e79b5c2ce000217122,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722253319712946192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name
: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05903437cede24841c12e3528eca50aacca702174d5674c4694e77480051fc97,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253319590114552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566,PodSandboxId:e671b1b6a37b90a609834ca1b97cba7904e9b09314c9290f8cde760c1cc7187f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253319420785934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb3e15e6fe5f14a206b948a13cf85693e19cec32f336f85024559f542522af4,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722253319360482195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07,PodSandboxId:bcab417350922782b0295673049bbf8cdc00112ddcd42c10a5946a78131fb6ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253319378830721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5f
fb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397,PodSandboxId:4c892374c85fc968454d6969d59a211e44d0bd9788309eae943b9cbc4154e8db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722253319397248692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Ann
otations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722252826211018342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annot
ations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690309362743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690267316165,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722252678491071355,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252675058385510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252655364856326,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252655267651801,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=500951cd-f55b-4f21-a12c-230947ce1e14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.532569487Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f96717bd-42ea-47c6-91d6-f899336ed516 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.532695593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f96717bd-42ea-47c6-91d6-f899336ed516 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.534023581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cbb2747-6a20-4f09-b9c4-44b05dfe48ee name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.534468604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253501534436792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cbb2747-6a20-4f09-b9c4-44b05dfe48ee name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.534982488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f66452d-c06f-46b2-bfa2-f68a2bca42bd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.535052326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f66452d-c06f-46b2-bfa2-f68a2bca42bd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.535585994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:973ad904749b3bac9b05f8e71171231ae6361a24ead1f752e062f6279e91493e,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253498553102228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdc756e57258c28b832d79ce01adca1bd5873b5d76b82e532a622f4e38a232e,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253363559198417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76745dad7b41c48929f36faf8ef63848b9b6cfd4a087a0fa1176ba5de5bdea70,PodSandboxId:50f119d8186f40739369e20530336e4a3cdd5817447844cabdc3ae1072d5d80f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722253352853383410,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb1ff299a498b985d77ca9503897a1f50bccd5168d3155c55a706e62986230f,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253351734412633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a86f388a73255f4296d5a1c5912289fa84b6271f3cafd3e24cc4b0dda2f3554d,PodSandboxId:2ab993e81dcd50362030977dacfd8a791b23516b398ad194c81fd25447f64ce4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722253330700237365,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bba932b45fc610b002ddc98e5da80b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722253319760490986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843,PodSandboxId:7a603ee93794e9172dec48067d3971c2b975748779f16725f61f391cb635a3b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320086621959,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11,PodSandboxId:db19f608bd022d02c46fc19a1f9415ba47dd011ce34d0466e74ec1a7fafadd52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320002066473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46,PodSandboxId:e00235e1a109fea7897fb4cc15e55a8a04911b5211ffd4e79b5c2ce000217122,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722253319712946192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name
: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05903437cede24841c12e3528eca50aacca702174d5674c4694e77480051fc97,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253319590114552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566,PodSandboxId:e671b1b6a37b90a609834ca1b97cba7904e9b09314c9290f8cde760c1cc7187f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253319420785934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb3e15e6fe5f14a206b948a13cf85693e19cec32f336f85024559f542522af4,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722253319360482195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07,PodSandboxId:bcab417350922782b0295673049bbf8cdc00112ddcd42c10a5946a78131fb6ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253319378830721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5f
fb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397,PodSandboxId:4c892374c85fc968454d6969d59a211e44d0bd9788309eae943b9cbc4154e8db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722253319397248692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Ann
otations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722252826211018342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annot
ations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690309362743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690267316165,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722252678491071355,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252675058385510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252655364856326,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252655267651801,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f66452d-c06f-46b2-bfa2-f68a2bca42bd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.545259685Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=614e0d91-be6b-4710-9753-1631eb4ae659 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.545597352Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:50f119d8186f40739369e20530336e4a3cdd5817447844cabdc3ae1072d5d80f,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-t69zw,Uid:ba70f798-7f59-4cd9-955c-82ce880ebcf9,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253352717098740,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:33:42.445558081Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ab993e81dcd50362030977dacfd8a791b23516b398ad194c81fd25447f64ce4,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-691698,Uid:3bba932b45fc610b002ddc98e5da80b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1722253330609497406,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bba932b45fc610b002ddc98e5da80b5,},Annotations:map[string]string{kubernetes.io/config.hash: 3bba932b45fc610b002ddc98e5da80b5,kubernetes.io/config.seen: 2024-07-29T11:41:58.509244854Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7a603ee93794e9172dec48067d3971c2b975748779f16725f61f391cb635a3b0,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-r48d8,Uid:4d0329d8-26c1-49e5-8af9-8ecda56993ca,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253319151943046,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-29T11:31:29.718988943Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db19f608bd022d02c46fc19a1f9415ba47dd011ce34d0466e74ec1a7fafadd52,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-p7zbj,Uid:7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253319094430702,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:31:29.714822987Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-691698,Uid:0b9e5f0877ca264a45eb8a7bf07a4ef2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253319073697655,Labels:map[string]strin
g{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.244:8443,kubernetes.io/config.hash: 0b9e5f0877ca264a45eb8a7bf07a4ef2,kubernetes.io/config.seen: 2024-07-29T11:31:01.496124903Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:694c60e1-9d4e-4fea-96e6-21554bbf1aaa,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253319067553645,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T11:31:29.724327502Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e00235e1a109fea7897fb4cc15e55a8a04911b5211ffd4e79b5c2ce000217122,Metadata:&PodSandboxMetadata{Name:kindnet-gl972,Uid:
caf4ea26-7d7a-419f-9493-67639c78ed1d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253319010339085,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:31:14.529191557Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e671b1b6a37b90a609834ca1b97cba7904e9b09314c9290f8cde760c1cc7187f,Metadata:&PodSandboxMetadata{Name:etcd-ha-691698,Uid:3e090ac15413f491114ca03adef34911,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253318981988758,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34
911,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.244:2379,kubernetes.io/config.hash: 3e090ac15413f491114ca03adef34911,kubernetes.io/config.seen: 2024-07-29T11:31:01.496119926Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-691698,Uid:3049f42a07ecb14cd8bfdb4d5cfad196,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253318970279062,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3049f42a07ecb14cd8bfdb4d5cfad196,kubernetes.io/config.seen: 2024-07-29T11:31:01.496126009Z,kubernetes.io/config.source: f
ile,},RuntimeHandler:,},&PodSandbox{Id:4c892374c85fc968454d6969d59a211e44d0bd9788309eae943b9cbc4154e8db,Metadata:&PodSandboxMetadata{Name:kube-proxy-5hn2s,Uid:b73c788f-9f8d-421e-b967-89b9154ea946,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253318965941779,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T11:31:14.521353495Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bcab417350922782b0295673049bbf8cdc00112ddcd42c10a5946a78131fb6ac,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-691698,Uid:9bb5ffb5c77b0a888651c9baeb69857d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722253318956827069,Labels:map[string]string{component: kube-scheduler,io.ku
bernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9bb5ffb5c77b0a888651c9baeb69857d,kubernetes.io/config.seen: 2024-07-29T11:31:01.496127094Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=614e0d91-be6b-4710-9753-1631eb4ae659 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.546607514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4089a8c-06c6-434e-937a-e91291828f17 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.546727800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4089a8c-06c6-434e-937a-e91291828f17 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.547572345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:973ad904749b3bac9b05f8e71171231ae6361a24ead1f752e062f6279e91493e,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253498553102228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdc756e57258c28b832d79ce01adca1bd5873b5d76b82e532a622f4e38a232e,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253363559198417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76745dad7b41c48929f36faf8ef63848b9b6cfd4a087a0fa1176ba5de5bdea70,PodSandboxId:50f119d8186f40739369e20530336e4a3cdd5817447844cabdc3ae1072d5d80f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722253352853383410,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb1ff299a498b985d77ca9503897a1f50bccd5168d3155c55a706e62986230f,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253351734412633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a86f388a73255f4296d5a1c5912289fa84b6271f3cafd3e24cc4b0dda2f3554d,PodSandboxId:2ab993e81dcd50362030977dacfd8a791b23516b398ad194c81fd25447f64ce4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722253330700237365,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bba932b45fc610b002ddc98e5da80b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843,PodSandboxId:7a603ee93794e9172dec48067d3971c2b975748779f16725f61f391cb635a3b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320086621959,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11,PodSandboxId:db19f608bd022d02c46fc19a1f9415ba47dd011ce34d0466e74ec1a7fafadd52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320002066473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash:
cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46,PodSandboxId:e00235e1a109fea7897fb4cc15e55a8a04911b5211ffd4e79b5c2ce000217122,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722253319712946192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubern
etes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566,PodSandboxId:e671b1b6a37b90a609834ca1b97cba7904e9b09314c9290f8cde760c1cc7187f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253319420785934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07,PodSandboxId:bcab417350922782b0295673049bbf8cdc00112ddcd42c10a5946a78131fb6ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253319378830721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397,PodSandboxId:4c892374c85fc968454d6969d59a211e44d0bd9788309eae943b9cbc4154e8db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722253319397248692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8
d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4089a8c-06c6-434e-937a-e91291828f17 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.591935091Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3eff23ac-0ac0-4307-beb9-f1e0cb8ea05c name=/runtime.v1.ImageService/ListImages
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.592442635Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.3],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315],Size_:117609954,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7 registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e],Size_:112198984,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{
Id:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.3],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266 registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4],Size_:63051080,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,RepoTags:[registry.k8s.io/kube-proxy:v1.30.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80 registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65],Size_:85953945,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 re
gistry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d8
67d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,RepoTags:[docker.io/kindest/kindnetd:v20240715-585640e9],RepoDigests:[docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115 docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493],Size_:87165492,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kub
e-vip@sha256:7eb725aff32fd4b31484f6e8e44b538f8403ebc8bd4218ea0ec28218682afff1],Size_:49570267,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,RepoTags:[docker.io/kindest/kindnetd:v20240719-e7903573],RepoDigests:[docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9 docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a],Size_:87174707,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=3eff23ac-0ac0-4307-beb9-f1e0cb8ea05c name=/runtim
e.v1.ImageService/ListImages
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.595158406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5c9c944-1fcc-4bc7-92df-51b185de6739 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.595258340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5c9c944-1fcc-4bc7-92df-51b185de6739 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.596574797Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e625a3e-1f55-4969-9964-810d9f376c51 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.597170723Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253501597148025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e625a3e-1f55-4969-9964-810d9f376c51 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.597860997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d6ed231-6118-4bc9-be8f-c2907745b5ed name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.597937880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d6ed231-6118-4bc9-be8f-c2907745b5ed name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:45:01 ha-691698 crio[3887]: time="2024-07-29 11:45:01.598372677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:973ad904749b3bac9b05f8e71171231ae6361a24ead1f752e062f6279e91493e,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253498553102228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdc756e57258c28b832d79ce01adca1bd5873b5d76b82e532a622f4e38a232e,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253363559198417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76745dad7b41c48929f36faf8ef63848b9b6cfd4a087a0fa1176ba5de5bdea70,PodSandboxId:50f119d8186f40739369e20530336e4a3cdd5817447844cabdc3ae1072d5d80f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722253352853383410,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb1ff299a498b985d77ca9503897a1f50bccd5168d3155c55a706e62986230f,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253351734412633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a86f388a73255f4296d5a1c5912289fa84b6271f3cafd3e24cc4b0dda2f3554d,PodSandboxId:2ab993e81dcd50362030977dacfd8a791b23516b398ad194c81fd25447f64ce4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722253330700237365,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bba932b45fc610b002ddc98e5da80b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722253319760490986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843,PodSandboxId:7a603ee93794e9172dec48067d3971c2b975748779f16725f61f391cb635a3b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320086621959,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11,PodSandboxId:db19f608bd022d02c46fc19a1f9415ba47dd011ce34d0466e74ec1a7fafadd52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320002066473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46,PodSandboxId:e00235e1a109fea7897fb4cc15e55a8a04911b5211ffd4e79b5c2ce000217122,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722253319712946192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name
: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05903437cede24841c12e3528eca50aacca702174d5674c4694e77480051fc97,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253319590114552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566,PodSandboxId:e671b1b6a37b90a609834ca1b97cba7904e9b09314c9290f8cde760c1cc7187f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253319420785934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb3e15e6fe5f14a206b948a13cf85693e19cec32f336f85024559f542522af4,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722253319360482195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07,PodSandboxId:bcab417350922782b0295673049bbf8cdc00112ddcd42c10a5946a78131fb6ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253319378830721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5f
fb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397,PodSandboxId:4c892374c85fc968454d6969d59a211e44d0bd9788309eae943b9cbc4154e8db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722253319397248692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Ann
otations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722252826211018342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annot
ations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690309362743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690267316165,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722252678491071355,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252675058385510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252655364856326,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252655267651801,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d6ed231-6118-4bc9-be8f-c2907745b5ed name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	973ad904749b3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 seconds ago       Running             storage-provisioner       6                   26c07c1103338       storage-provisioner
	8cdc756e57258       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago       Running             kube-controller-manager   2                   656cbf9360b23       kube-controller-manager-ha-691698
	76745dad7b41c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago       Running             busybox                   1                   50f119d8186f4       busybox-fc5497c4f-t69zw
	9fb1ff299a498       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago       Running             kube-apiserver            3                   2b7c38387340a       kube-apiserver-ha-691698
	a86f388a73255       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago       Running             kube-vip                  0                   2ab993e81dcd5       kube-vip-ha-691698
	29581f41078e4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   7a603ee93794e       coredns-7db6d8ff4d-r48d8
	ccbd2ebd46e13       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   db19f608bd022       coredns-7db6d8ff4d-p7zbj
	2d706a5426fe1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       5                   26c07c1103338       storage-provisioner
	51064326e4ef3       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      3 minutes ago       Running             kindnet-cni               1                   e00235e1a109f       kindnet-gl972
	05903437cede2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      3 minutes ago       Exited              kube-apiserver            2                   2b7c38387340a       kube-apiserver-ha-691698
	e32dad0451680       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   e671b1b6a37b9       etcd-ha-691698
	f0c4593139567       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      3 minutes ago       Running             kube-proxy                1                   4c892374c85fc       kube-proxy-5hn2s
	24e35a070016e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      3 minutes ago       Running             kube-scheduler            1                   bcab417350922       kube-scheduler-ha-691698
	5fb3e15e6fe5f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      3 minutes ago       Exited              kube-controller-manager   1                   656cbf9360b23       kube-controller-manager-ha-691698
	238fb47cd6e36       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago      Exited              busybox                   0                   764f56dfda80f       busybox-fc5497c4f-t69zw
	0d819119d1f04       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Exited              coredns                   0                   d32f436d019c4       coredns-7db6d8ff4d-r48d8
	833566290ab18       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Exited              coredns                   0                   8d892f55e419c       coredns-7db6d8ff4d-p7zbj
	2c476db3ff154       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago      Exited              kindnet-cni               0                   ff04fbe0e7040       kindnet-gl972
	2da9ca3c5237b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Exited              kube-proxy                0                   7978ad5ef51fb       kube-proxy-5hn2s
	24326f59696b1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago      Exited              kube-scheduler            0                   f7a6dae3abd7e       kube-scheduler-ha-691698
	1d0e28e4eb5d8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago      Exited              etcd                      0                   476f4c4be9581       etcd-ha-691698
	
	
	==> coredns [0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53] <==
	[INFO] 10.244.0.4:50254 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000059389s
	[INFO] 10.244.0.4:48812 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00188043s
	[INFO] 10.244.1.2:43643 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173662s
	[INFO] 10.244.1.2:52260 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003470125s
	[INFO] 10.244.1.2:54673 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136747s
	[INFO] 10.244.2.2:34318 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000273221s
	[INFO] 10.244.2.2:60262 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001476515s
	[INFO] 10.244.2.2:57052 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142747s
	[INFO] 10.244.2.2:54120 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108997s
	[INFO] 10.244.1.2:44298 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081482s
	[INFO] 10.244.1.2:57785 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116033s
	[INFO] 10.244.2.2:38389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154869s
	[INFO] 10.244.2.2:33473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139061s
	[INFO] 10.244.2.2:36153 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064585s
	[INFO] 10.244.0.4:36379 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097216s
	[INFO] 10.244.0.4:47834 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063726s
	[INFO] 10.244.1.2:33111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120166s
	[INFO] 10.244.2.2:43983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122897s
	[INFO] 10.244.2.2:35012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148813s
	[INFO] 10.244.2.2:40714 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00011869s
	[INFO] 10.244.0.4:44215 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086794s
	[INFO] 10.244.0.4:38040 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005703s
	[INFO] 10.244.0.4:50677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108307s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41828->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41828->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41812->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[997064697]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 11:42:12.255) (total time: 12362ms):
	Trace[997064697]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41812->10.96.0.1:443: read: connection reset by peer 12361ms (11:42:24.617)
	Trace[997064697]: [12.362009663s] [12.362009663s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41812->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[384103912]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 11:42:34.062) (total time: 10000ms):
	Trace[384103912]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:42:44.062)
	Trace[384103912]: [10.000911224s] [10.000911224s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486] <==
	[INFO] 10.244.2.2:34056 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219216s
	[INFO] 10.244.2.2:60410 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161507s
	[INFO] 10.244.0.4:59522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147092s
	[INFO] 10.244.0.4:33605 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742361s
	[INFO] 10.244.0.4:54567 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076754s
	[INFO] 10.244.0.4:35616 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072926s
	[INFO] 10.244.0.4:50762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001270357s
	[INFO] 10.244.0.4:56719 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059193s
	[INFO] 10.244.0.4:42114 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124091s
	[INFO] 10.244.0.4:54680 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047725s
	[INFO] 10.244.1.2:33443 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111093s
	[INFO] 10.244.1.2:60576 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102839s
	[INFO] 10.244.2.2:47142 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084964s
	[INFO] 10.244.0.4:35741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015832s
	[INFO] 10.244.0.4:39817 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103529s
	[INFO] 10.244.1.2:45931 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134869s
	[INFO] 10.244.1.2:36836 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000217632s
	[INFO] 10.244.1.2:59273 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107311s
	[INFO] 10.244.2.2:49049 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000205027s
	[INFO] 10.244.0.4:42280 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127437s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[165632698]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 11:42:33.434) (total time: 10001ms):
	Trace[165632698]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:42:43.435)
	Trace[165632698]: [10.001285626s] [10.001285626s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-691698
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_31_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:30:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:45:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:42:47 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:42:47 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:42:47 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:42:47 +0000   Mon, 29 Jul 2024 11:31:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-691698
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ffcbde1a62f4ed28ef2171c0da37339
	  System UUID:                8ffcbde1-a62f-4ed2-8ef2-171c0da37339
	  Boot ID:                    f8eb0442-fda7-4803-ab40-821f5c33cb8d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-t69zw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-p7zbj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-r48d8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-691698                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-gl972                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-691698             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-691698    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5hn2s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-691698             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-691698                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 2m17s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m    kubelet          Node ha-691698 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m    kubelet          Node ha-691698 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m    kubelet          Node ha-691698 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-691698 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal   RegisteredNode           11m    node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Warning  ContainerGCFailed        4m1s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m3s   node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal   RegisteredNode           2m3s   node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal   RegisteredNode           29s    node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	
	
	Name:               ha-691698-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_32_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:32:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:44:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:43:34 +0000   Mon, 29 Jul 2024 11:42:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:43:34 +0000   Mon, 29 Jul 2024 11:42:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:43:34 +0000   Mon, 29 Jul 2024 11:42:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:43:34 +0000   Mon, 29 Jul 2024 11:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-691698-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c019d6e64b644eff86b333652cd5328b
	  System UUID:                c019d6e6-4b64-4eff-86b3-33652cd5328b
	  Boot ID:                    8d642b6f-d885-4b47-8890-605208e38eb4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-22qb4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-691698-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-wrx27                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-691698-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-691698-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8p4nc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-691698-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-691698-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m7s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-691698-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-691698-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-691698-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  NodeNotReady             9m25s                  node-controller  Node ha-691698-m02 status is now: NodeNotReady
	  Normal  Starting                 2m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node ha-691698-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node ha-691698-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s (x7 over 2m38s)  kubelet          Node ha-691698-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m3s                   node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           2m3s                   node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           29s                    node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	
	
	Name:               ha-691698-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_33_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:33:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:44:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:44:34 +0000   Mon, 29 Jul 2024 11:44:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:44:34 +0000   Mon, 29 Jul 2024 11:44:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:44:34 +0000   Mon, 29 Jul 2024 11:44:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:44:34 +0000   Mon, 29 Jul 2024 11:44:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    ha-691698-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc0ebb3b7dee46c2bbb6e4b87cde5294
	  System UUID:                dc0ebb3b-7dee-46c2-bbb6-e4b87cde5294
	  Boot ID:                    c62a600b-d52e-44ae-9e86-fc6c4f2e17cd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-72n5l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-691698-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-n929l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-691698-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-691698-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-vd69n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-691698-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-691698-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 43s                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-691698-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-691698-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-691698-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	  Normal   RegisteredNode           2m3s               node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	  Normal   RegisteredNode           2m3s               node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	  Normal   NodeNotReady             83s                node-controller  Node ha-691698-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  58s (x2 over 58s)  kubelet          Node ha-691698-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x2 over 58s)  kubelet          Node ha-691698-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x2 over 58s)  kubelet          Node ha-691698-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 58s                kubelet          Node ha-691698-m03 has been rebooted, boot id: c62a600b-d52e-44ae-9e86-fc6c4f2e17cd
	  Normal   NodeReady                58s                kubelet          Node ha-691698-m03 status is now: NodeReady
	  Normal   RegisteredNode           29s                node-controller  Node ha-691698-m03 event: Registered Node ha-691698-m03 in Controller
	
	
	Name:               ha-691698-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_34_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:44:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:44:53 +0000   Mon, 29 Jul 2024 11:44:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:44:53 +0000   Mon, 29 Jul 2024 11:44:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:44:53 +0000   Mon, 29 Jul 2024 11:44:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:44:53 +0000   Mon, 29 Jul 2024 11:44:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-691698-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 acedffa77bf44161b125b5360bc5ba83
	  System UUID:                acedffa7-7bf4-4161-b125-b5360bc5ba83
	  Boot ID:                    476c2a79-4d31-467a-9808-931b9ef2342d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pknpn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-9k2mb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-691698-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-691698-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-691698-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-691698-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m3s               node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   RegisteredNode           2m3s               node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   NodeNotReady             83s                node-controller  Node ha-691698-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           29s                node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-691698-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-691698-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-691698-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-691698-m04 has been rebooted, boot id: 476c2a79-4d31-467a-9808-931b9ef2342d
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-691698-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.170622] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.056672] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055838] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.156855] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.147139] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.275583] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.097124] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +4.229544] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.063086] kauditd_printk_skb: 158 callbacks suppressed
	[Jul29 11:31] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[  +0.086846] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.595904] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.192166] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 11:32] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 11:38] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 11:41] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[  +0.145977] systemd-fstab-generator[3816]: Ignoring "noauto" option for root device
	[  +0.181688] systemd-fstab-generator[3830]: Ignoring "noauto" option for root device
	[  +0.146077] systemd-fstab-generator[3842]: Ignoring "noauto" option for root device
	[  +0.288873] systemd-fstab-generator[3870]: Ignoring "noauto" option for root device
	[  +0.804581] systemd-fstab-generator[3972]: Ignoring "noauto" option for root device
	[Jul29 11:42] kauditd_printk_skb: 225 callbacks suppressed
	[ +19.100063] kauditd_printk_skb: 1 callbacks suppressed
	[ +21.464801] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d] <==
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-29T11:40:25.71501Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"38b93d7e943acb5d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T11:40:25.71517Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715204Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715234Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715331Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715364Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715396Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715422Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715429Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715439Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715453Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715522Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715611Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715721Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715758Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.718181Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.244:2380"}
	{"level":"info","ts":"2024-07-29T11:40:25.718318Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.244:2380"}
	{"level":"info","ts":"2024-07-29T11:40:25.718358Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-691698","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.244:2380"],"advertise-client-urls":["https://192.168.39.244:2379"]}
	
	
	==> etcd [e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566] <==
	{"level":"warn","ts":"2024-07-29T11:44:00.940895Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"239f4a9a4c2b2b5d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:00.940904Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"239f4a9a4c2b2b5d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:04.214721Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.23:2380/version","remote-member-id":"239f4a9a4c2b2b5d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:04.214843Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"239f4a9a4c2b2b5d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:05.941354Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"239f4a9a4c2b2b5d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:05.941437Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"239f4a9a4c2b2b5d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:08.217221Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.23:2380/version","remote-member-id":"239f4a9a4c2b2b5d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:08.217416Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"239f4a9a4c2b2b5d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:10.941588Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"239f4a9a4c2b2b5d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:10.941751Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"239f4a9a4c2b2b5d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:12.220128Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.23:2380/version","remote-member-id":"239f4a9a4c2b2b5d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:12.220292Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"239f4a9a4c2b2b5d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:15.942337Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"239f4a9a4c2b2b5d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:15.942361Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"239f4a9a4c2b2b5d","rtt":"0s","error":"dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:16.222034Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.23:2380/version","remote-member-id":"239f4a9a4c2b2b5d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T11:44:16.222179Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"239f4a9a4c2b2b5d","error":"Get \"https://192.168.39.23:2380/version\": dial tcp 192.168.39.23:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T11:44:17.131124Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:44:17.131278Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:44:17.131321Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:44:17.156613Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38b93d7e943acb5d","to":"239f4a9a4c2b2b5d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T11:44:17.156904Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:44:17.183774Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38b93d7e943acb5d","to":"239f4a9a4c2b2b5d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T11:44:17.18402Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:44:23.178046Z","caller":"traceutil/trace.go:171","msg":"trace[83441966] transaction","detail":"{read_only:false; response_revision:2495; number_of_response:1; }","duration":"119.457875ms","start":"2024-07-29T11:44:23.05857Z","end":"2024-07-29T11:44:23.178028Z","steps":["trace[83441966] 'process raft request'  (duration: 119.232966ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:45:00.698109Z","caller":"traceutil/trace.go:171","msg":"trace[1338971810] transaction","detail":"{read_only:false; response_revision:2613; number_of_response:1; }","duration":"109.374913ms","start":"2024-07-29T11:45:00.588506Z","end":"2024-07-29T11:45:00.697881Z","steps":["trace[1338971810] 'process raft request'  (duration: 109.234588ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:45:02 up 14 min,  0 users,  load average: 0.09, 0.25, 0.19
	Linux ha-691698 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756] <==
	I0729 11:39:49.465783       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:39:59.465811       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:39:59.465856       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:39:59.465995       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:39:59.466015       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:39:59.466065       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:39:59.466081       1 main.go:299] handling current node
	I0729 11:39:59.466093       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:39:59.466098       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:40:09.457348       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:40:09.457395       1 main.go:299] handling current node
	I0729 11:40:09.457415       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:40:09.457420       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:40:09.457559       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:40:09.457580       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:40:09.457633       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:40:09.457638       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:40:19.457732       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:40:19.457835       1 main.go:299] handling current node
	I0729 11:40:19.457864       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:40:19.457904       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:40:19.458040       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:40:19.458141       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:40:19.458256       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:40:19.458310       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46] <==
	I0729 11:44:30.678799       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:44:40.680831       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:44:40.680962       1 main.go:299] handling current node
	I0729 11:44:40.680997       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:44:40.681018       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:44:40.681152       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:44:40.681176       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:44:40.681295       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:44:40.681329       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:44:50.679750       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:44:50.679796       1 main.go:299] handling current node
	I0729 11:44:50.679813       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:44:50.679818       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:44:50.679954       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:44:50.679975       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:44:50.680023       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:44:50.680039       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:45:00.677586       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:45:00.677745       1 main.go:299] handling current node
	I0729 11:45:00.677780       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:45:00.677834       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:45:00.677996       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:45:00.678049       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:45:00.678156       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:45:00.678200       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [05903437cede24841c12e3528eca50aacca702174d5674c4694e77480051fc97] <==
	I0729 11:42:00.584516       1 options.go:221] external host was not specified, using 192.168.39.244
	I0729 11:42:00.612118       1 server.go:148] Version: v1.30.3
	I0729 11:42:00.612193       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:42:01.087725       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 11:42:01.100736       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 11:42:01.107341       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 11:42:01.109708       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 11:42:01.109972       1 instance.go:299] Using reconciler: lease
	W0729 11:42:21.083928       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 11:42:21.083972       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 11:42:21.111562       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [9fb1ff299a498b985d77ca9503897a1f50bccd5168d3155c55a706e62986230f] <==
	I0729 11:42:46.280343       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 11:42:46.281874       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 11:42:46.400896       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 11:42:46.401018       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 11:42:46.401122       1 aggregator.go:165] initial CRD sync complete...
	I0729 11:42:46.401154       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 11:42:46.401161       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 11:42:46.402402       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 11:42:46.402488       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 11:42:46.434646       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 11:42:46.438506       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 11:42:46.438533       1 policy_source.go:224] refreshing policies
	I0729 11:42:46.476750       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 11:42:46.476883       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 11:42:46.477293       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 11:42:46.485234       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0729 11:42:46.495387       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.23]
	I0729 11:42:46.496929       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 11:42:46.503392       1 cache.go:39] Caches are synced for autoregister controller
	I0729 11:42:46.508850       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 11:42:46.512874       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0729 11:42:46.513738       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 11:42:47.284092       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 11:42:48.037720       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.23 192.168.39.244]
	W0729 11:43:08.040330       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.244 192.168.39.5]
	
	
	==> kube-controller-manager [5fb3e15e6fe5f14a206b948a13cf85693e19cec32f336f85024559f542522af4] <==
	I0729 11:42:01.063154       1 serving.go:380] Generated self-signed cert in-memory
	I0729 11:42:01.383871       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 11:42:01.383910       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:42:01.387545       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 11:42:01.387843       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 11:42:01.388100       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 11:42:01.388239       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 11:42:22.125173       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.244:8443/healthz\": dial tcp 192.168.39.244:8443: connect: connection refused"
	
	
	==> kube-controller-manager [8cdc756e57258c28b832d79ce01adca1bd5873b5d76b82e532a622f4e38a232e] <==
	I0729 11:42:59.416380       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 11:42:59.418224       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 11:42:59.432267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.407264ms"
	I0729 11:42:59.432578       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.89µs"
	I0729 11:42:59.436457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.406845ms"
	I0729 11:42:59.436559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.153µs"
	I0729 11:42:59.482893       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 11:42:59.502837       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 11:42:59.556269       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 11:42:59.604508       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 11:42:59.614113       1 shared_informer.go:320] Caches are synced for disruption
	I0729 11:42:59.620994       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 11:43:00.011745       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 11:43:00.011781       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 11:43:00.037799       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 11:43:00.065097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="24.990179ms"
	I0729 11:43:00.065258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.512µs"
	I0729 11:43:30.073306       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.244152ms"
	I0729 11:43:30.073420       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.982µs"
	I0729 11:43:39.629590       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.342864ms"
	I0729 11:43:39.629804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.379µs"
	I0729 11:44:04.939192       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.605µs"
	I0729 11:44:21.132190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.385342ms"
	I0729 11:44:21.133445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.188µs"
	I0729 11:44:54.000582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-691698-m04"
	
	
	==> kube-proxy [2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323] <==
	E0729 11:39:05.961411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:09.033003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:09.033084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:09.033134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:09.033150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:09.033284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:09.033323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:15.497092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:15.497152       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:15.497225       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:15.497282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:15.497342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:15.497373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:24.715323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:24.715454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:27.786388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:27.786472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:27.786535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:27.786623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:49.289201       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:49.289291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:49.289363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:49.289378       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:52.362271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:52.362324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397] <==
	E0729 11:42:25.961946       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-691698\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 11:42:44.394296       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-691698\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 11:42:44.394491       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0729 11:42:44.437047       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:42:44.437152       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:42:44.437183       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:42:44.439807       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:42:44.440030       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:42:44.440186       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:42:44.441202       1 config.go:192] "Starting service config controller"
	I0729 11:42:44.441291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:42:44.441345       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:42:44.441362       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:42:44.442163       1 config.go:319] "Starting node config controller"
	I0729 11:42:44.442204       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0729 11:42:47.465040       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 11:42:47.465846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:42:47.469903       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:42:47.465975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:42:47.469966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:42:47.466041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:42:47.470059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 11:42:48.641809       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:42:48.642048       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:42:49.042265       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0] <==
	W0729 11:40:23.602438       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:40:23.602469       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:40:23.621838       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:40:23.621882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:40:23.759379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:40:23.759424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:40:23.768820       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:40:23.769024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:40:23.920537       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:40:23.920648       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:40:24.119334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:40:24.119403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 11:40:24.181831       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:40:24.181895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 11:40:24.213357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:40:24.213436       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:40:24.367322       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:40:24.367430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 11:40:24.380951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:40:24.380998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 11:40:24.408611       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:40:24.408737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:40:24.445167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:40:24.445291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:40:25.611135       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07] <==
	W0729 11:42:46.373744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:42:46.373773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:42:46.373837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:42:46.373862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:42:46.373903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:42:46.373927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 11:42:46.373968       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:42:46.373993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:42:46.374023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:42:46.374047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:42:46.374101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:42:46.374125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:42:46.374166       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:42:46.374189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 11:42:46.374221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:42:46.374246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 11:42:46.374288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:42:46.374313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 11:42:46.374349       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:42:46.374370       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 11:42:46.374426       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:42:46.374449       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:42:46.401643       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:42:46.404765       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 11:42:59.741945       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 11:43:29 ha-691698 kubelet[1382]: E0729 11:43:29.544934    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(694c60e1-9d4e-4fea-96e6-21554bbf1aaa)\"" pod="kube-system/storage-provisioner" podUID="694c60e1-9d4e-4fea-96e6-21554bbf1aaa"
	Jul 29 11:43:43 ha-691698 kubelet[1382]: I0729 11:43:43.545323    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:43:43 ha-691698 kubelet[1382]: E0729 11:43:43.545617    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(694c60e1-9d4e-4fea-96e6-21554bbf1aaa)\"" pod="kube-system/storage-provisioner" podUID="694c60e1-9d4e-4fea-96e6-21554bbf1aaa"
	Jul 29 11:43:54 ha-691698 kubelet[1382]: I0729 11:43:54.544573    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:43:54 ha-691698 kubelet[1382]: E0729 11:43:54.545129    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(694c60e1-9d4e-4fea-96e6-21554bbf1aaa)\"" pod="kube-system/storage-provisioner" podUID="694c60e1-9d4e-4fea-96e6-21554bbf1aaa"
	Jul 29 11:44:01 ha-691698 kubelet[1382]: E0729 11:44:01.567204    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:44:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:44:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:44:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:44:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:44:07 ha-691698 kubelet[1382]: I0729 11:44:07.544339    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:44:07 ha-691698 kubelet[1382]: E0729 11:44:07.544655    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(694c60e1-9d4e-4fea-96e6-21554bbf1aaa)\"" pod="kube-system/storage-provisioner" podUID="694c60e1-9d4e-4fea-96e6-21554bbf1aaa"
	Jul 29 11:44:20 ha-691698 kubelet[1382]: I0729 11:44:20.544320    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:44:20 ha-691698 kubelet[1382]: E0729 11:44:20.544875    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(694c60e1-9d4e-4fea-96e6-21554bbf1aaa)\"" pod="kube-system/storage-provisioner" podUID="694c60e1-9d4e-4fea-96e6-21554bbf1aaa"
	Jul 29 11:44:35 ha-691698 kubelet[1382]: I0729 11:44:35.544146    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:44:35 ha-691698 kubelet[1382]: E0729 11:44:35.544557    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(694c60e1-9d4e-4fea-96e6-21554bbf1aaa)\"" pod="kube-system/storage-provisioner" podUID="694c60e1-9d4e-4fea-96e6-21554bbf1aaa"
	Jul 29 11:44:47 ha-691698 kubelet[1382]: I0729 11:44:47.545944    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:44:47 ha-691698 kubelet[1382]: E0729 11:44:47.548932    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(694c60e1-9d4e-4fea-96e6-21554bbf1aaa)\"" pod="kube-system/storage-provisioner" podUID="694c60e1-9d4e-4fea-96e6-21554bbf1aaa"
	Jul 29 11:44:58 ha-691698 kubelet[1382]: I0729 11:44:58.544271    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:44:58 ha-691698 kubelet[1382]: I0729 11:44:58.673476    1382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-691698" podStartSLOduration=93.673451886 podStartE2EDuration="1m33.673451886s" podCreationTimestamp="2024-07-29 11:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 11:43:35.915174377 +0000 UTC m=+754.529375109" watchObservedRunningTime="2024-07-29 11:44:58.673451886 +0000 UTC m=+837.287652619"
	Jul 29 11:45:01 ha-691698 kubelet[1382]: E0729 11:45:01.568181    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:45:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:45:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:45:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:45:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:45:01.138072  143587 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19336-113730/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-691698 -n ha-691698
helpers_test.go:261: (dbg) Run:  kubectl --context ha-691698 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (400.33s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 stop -v=7 --alsologtostderr: exit status 82 (2m0.464504787s)

                                                
                                                
-- stdout --
	* Stopping node "ha-691698-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:45:20.638210  143997 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:45:20.638353  143997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:45:20.638362  143997 out.go:304] Setting ErrFile to fd 2...
	I0729 11:45:20.638367  143997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:45:20.638541  143997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:45:20.638764  143997 out.go:298] Setting JSON to false
	I0729 11:45:20.638847  143997 mustload.go:65] Loading cluster: ha-691698
	I0729 11:45:20.639184  143997 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:45:20.639268  143997 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:45:20.639442  143997 mustload.go:65] Loading cluster: ha-691698
	I0729 11:45:20.639567  143997 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:45:20.639601  143997 stop.go:39] StopHost: ha-691698-m04
	I0729 11:45:20.640220  143997 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:45:20.640292  143997 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:45:20.655629  143997 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39611
	I0729 11:45:20.656112  143997 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:45:20.656708  143997 main.go:141] libmachine: Using API Version  1
	I0729 11:45:20.656733  143997 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:45:20.657170  143997 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:45:20.659768  143997 out.go:177] * Stopping node "ha-691698-m04"  ...
	I0729 11:45:20.661470  143997 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 11:45:20.661514  143997 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:45:20.661811  143997 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 11:45:20.661843  143997 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:45:20.665135  143997 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:45:20.665575  143997 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:44:48 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:45:20.665604  143997 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:45:20.665763  143997 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:45:20.665923  143997 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:45:20.666081  143997 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:45:20.666222  143997 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	I0729 11:45:20.747541  143997 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 11:45:20.799333  143997 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 11:45:20.850966  143997 main.go:141] libmachine: Stopping "ha-691698-m04"...
	I0729 11:45:20.851019  143997 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:45:20.852668  143997 main.go:141] libmachine: (ha-691698-m04) Calling .Stop
	I0729 11:45:20.856060  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 0/120
	I0729 11:45:21.857445  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 1/120
	I0729 11:45:22.859490  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 2/120
	I0729 11:45:23.860836  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 3/120
	I0729 11:45:24.862801  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 4/120
	I0729 11:45:25.864983  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 5/120
	I0729 11:45:26.866347  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 6/120
	I0729 11:45:27.867689  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 7/120
	I0729 11:45:28.868952  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 8/120
	I0729 11:45:29.870359  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 9/120
	I0729 11:45:30.871974  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 10/120
	I0729 11:45:31.873929  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 11/120
	I0729 11:45:32.875330  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 12/120
	I0729 11:45:33.876844  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 13/120
	I0729 11:45:34.879128  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 14/120
	I0729 11:45:35.881078  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 15/120
	I0729 11:45:36.882323  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 16/120
	I0729 11:45:37.883791  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 17/120
	I0729 11:45:38.885081  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 18/120
	I0729 11:45:39.886450  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 19/120
	I0729 11:45:40.887873  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 20/120
	I0729 11:45:41.889235  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 21/120
	I0729 11:45:42.890645  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 22/120
	I0729 11:45:43.892504  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 23/120
	I0729 11:45:44.893970  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 24/120
	I0729 11:45:45.895825  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 25/120
	I0729 11:45:46.897196  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 26/120
	I0729 11:45:47.898637  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 27/120
	I0729 11:45:48.900496  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 28/120
	I0729 11:45:49.902062  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 29/120
	I0729 11:45:50.904336  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 30/120
	I0729 11:45:51.905803  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 31/120
	I0729 11:45:52.907226  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 32/120
	I0729 11:45:53.908638  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 33/120
	I0729 11:45:54.910023  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 34/120
	I0729 11:45:55.911778  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 35/120
	I0729 11:45:56.913694  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 36/120
	I0729 11:45:57.915137  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 37/120
	I0729 11:45:58.916994  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 38/120
	I0729 11:45:59.918464  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 39/120
	I0729 11:46:00.920735  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 40/120
	I0729 11:46:01.922311  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 41/120
	I0729 11:46:02.924539  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 42/120
	I0729 11:46:03.925995  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 43/120
	I0729 11:46:04.927231  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 44/120
	I0729 11:46:05.929249  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 45/120
	I0729 11:46:06.930608  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 46/120
	I0729 11:46:07.932068  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 47/120
	I0729 11:46:08.934424  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 48/120
	I0729 11:46:09.935869  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 49/120
	I0729 11:46:10.938027  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 50/120
	I0729 11:46:11.939583  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 51/120
	I0729 11:46:12.941329  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 52/120
	I0729 11:46:13.943719  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 53/120
	I0729 11:46:14.945957  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 54/120
	I0729 11:46:15.947490  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 55/120
	I0729 11:46:16.949161  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 56/120
	I0729 11:46:17.951618  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 57/120
	I0729 11:46:18.953277  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 58/120
	I0729 11:46:19.954579  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 59/120
	I0729 11:46:20.956517  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 60/120
	I0729 11:46:21.957895  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 61/120
	I0729 11:46:22.959405  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 62/120
	I0729 11:46:23.960639  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 63/120
	I0729 11:46:24.962174  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 64/120
	I0729 11:46:25.963587  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 65/120
	I0729 11:46:26.964885  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 66/120
	I0729 11:46:27.966296  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 67/120
	I0729 11:46:28.967716  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 68/120
	I0729 11:46:29.969245  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 69/120
	I0729 11:46:30.971367  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 70/120
	I0729 11:46:31.973185  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 71/120
	I0729 11:46:32.974756  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 72/120
	I0729 11:46:33.976127  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 73/120
	I0729 11:46:34.977497  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 74/120
	I0729 11:46:35.979465  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 75/120
	I0729 11:46:36.981434  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 76/120
	I0729 11:46:37.982817  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 77/120
	I0729 11:46:38.984416  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 78/120
	I0729 11:46:39.985734  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 79/120
	I0729 11:46:40.987737  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 80/120
	I0729 11:46:41.989551  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 81/120
	I0729 11:46:42.990707  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 82/120
	I0729 11:46:43.992100  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 83/120
	I0729 11:46:44.993427  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 84/120
	I0729 11:46:45.995302  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 85/120
	I0729 11:46:46.996568  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 86/120
	I0729 11:46:47.997816  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 87/120
	I0729 11:46:48.998952  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 88/120
	I0729 11:46:50.000250  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 89/120
	I0729 11:46:51.002481  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 90/120
	I0729 11:46:52.003823  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 91/120
	I0729 11:46:53.005391  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 92/120
	I0729 11:46:54.006978  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 93/120
	I0729 11:46:55.008898  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 94/120
	I0729 11:46:56.010341  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 95/120
	I0729 11:46:57.012160  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 96/120
	I0729 11:46:58.014165  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 97/120
	I0729 11:46:59.015517  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 98/120
	I0729 11:47:00.016744  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 99/120
	I0729 11:47:01.018930  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 100/120
	I0729 11:47:02.020428  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 101/120
	I0729 11:47:03.021914  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 102/120
	I0729 11:47:04.023432  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 103/120
	I0729 11:47:05.024787  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 104/120
	I0729 11:47:06.026762  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 105/120
	I0729 11:47:07.028457  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 106/120
	I0729 11:47:08.030022  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 107/120
	I0729 11:47:09.031638  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 108/120
	I0729 11:47:10.033140  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 109/120
	I0729 11:47:11.035339  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 110/120
	I0729 11:47:12.036992  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 111/120
	I0729 11:47:13.038521  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 112/120
	I0729 11:47:14.040310  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 113/120
	I0729 11:47:15.041689  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 114/120
	I0729 11:47:16.043754  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 115/120
	I0729 11:47:17.045207  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 116/120
	I0729 11:47:18.046613  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 117/120
	I0729 11:47:19.049028  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 118/120
	I0729 11:47:20.050417  143997 main.go:141] libmachine: (ha-691698-m04) Waiting for machine to stop 119/120
	I0729 11:47:21.051164  143997 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 11:47:21.051233  143997 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 11:47:21.053325  143997 out.go:177] 
	W0729 11:47:21.054653  143997 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 11:47:21.054672  143997 out.go:239] * 
	* 
	W0729 11:47:21.056909  143997 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:47:21.058208  143997 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-691698 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr: exit status 3 (19.002482199s)

                                                
                                                
-- stdout --
	ha-691698
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-691698-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:47:21.103686  144433 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:47:21.103955  144433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:47:21.103964  144433 out.go:304] Setting ErrFile to fd 2...
	I0729 11:47:21.103968  144433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:47:21.104139  144433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:47:21.104298  144433 out.go:298] Setting JSON to false
	I0729 11:47:21.104324  144433 mustload.go:65] Loading cluster: ha-691698
	I0729 11:47:21.104383  144433 notify.go:220] Checking for updates...
	I0729 11:47:21.104734  144433 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:47:21.104751  144433 status.go:255] checking status of ha-691698 ...
	I0729 11:47:21.105176  144433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:21.105240  144433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:21.132806  144433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
	I0729 11:47:21.133358  144433 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:21.134087  144433 main.go:141] libmachine: Using API Version  1
	I0729 11:47:21.134124  144433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:21.134518  144433 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:21.134683  144433 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:47:21.136255  144433 status.go:330] ha-691698 host status = "Running" (err=<nil>)
	I0729 11:47:21.136275  144433 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:47:21.136587  144433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:21.136631  144433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:21.152319  144433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44841
	I0729 11:47:21.152774  144433 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:21.153341  144433 main.go:141] libmachine: Using API Version  1
	I0729 11:47:21.153375  144433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:21.153726  144433 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:21.153968  144433 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:47:21.156839  144433 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:47:21.157319  144433 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:47:21.157346  144433 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:47:21.157459  144433 host.go:66] Checking if "ha-691698" exists ...
	I0729 11:47:21.157737  144433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:21.157779  144433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:21.173437  144433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0729 11:47:21.173855  144433 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:21.174373  144433 main.go:141] libmachine: Using API Version  1
	I0729 11:47:21.174407  144433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:21.174755  144433 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:21.174962  144433 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:47:21.175232  144433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:47:21.175259  144433 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:47:21.178156  144433 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:47:21.178685  144433 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:47:21.178712  144433 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:47:21.178864  144433 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:47:21.179049  144433 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:47:21.179224  144433 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:47:21.179374  144433 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:47:21.260173  144433 ssh_runner.go:195] Run: systemctl --version
	I0729 11:47:21.266017  144433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:47:21.280227  144433 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:47:21.280256  144433 api_server.go:166] Checking apiserver status ...
	I0729 11:47:21.280291  144433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:21.292947  144433 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5075/cgroup
	W0729 11:47:21.301351  144433 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5075/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:21.301403  144433 ssh_runner.go:195] Run: ls
	I0729 11:47:21.305425  144433 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:47:21.309366  144433 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:47:21.309392  144433 status.go:422] ha-691698 apiserver status = Running (err=<nil>)
	I0729 11:47:21.309408  144433 status.go:257] ha-691698 status: &{Name:ha-691698 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:47:21.309434  144433 status.go:255] checking status of ha-691698-m02 ...
	I0729 11:47:21.309826  144433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:21.309855  144433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:21.325055  144433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0729 11:47:21.325528  144433 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:21.326086  144433 main.go:141] libmachine: Using API Version  1
	I0729 11:47:21.326113  144433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:21.326404  144433 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:21.326598  144433 main.go:141] libmachine: (ha-691698-m02) Calling .GetState
	I0729 11:47:21.328074  144433 status.go:330] ha-691698-m02 host status = "Running" (err=<nil>)
	I0729 11:47:21.328093  144433 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:47:21.328445  144433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:21.328491  144433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:21.343572  144433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0729 11:47:21.344056  144433 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:21.344553  144433 main.go:141] libmachine: Using API Version  1
	I0729 11:47:21.344574  144433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:21.344867  144433 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:21.345081  144433 main.go:141] libmachine: (ha-691698-m02) Calling .GetIP
	I0729 11:47:21.347783  144433 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:47:21.348198  144433 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:42:11 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:47:21.348223  144433 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:47:21.348387  144433 host.go:66] Checking if "ha-691698-m02" exists ...
	I0729 11:47:21.348778  144433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:21.348810  144433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:21.364380  144433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46773
	I0729 11:47:21.364806  144433 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:21.365289  144433 main.go:141] libmachine: Using API Version  1
	I0729 11:47:21.365316  144433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:21.365638  144433 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:21.365828  144433 main.go:141] libmachine: (ha-691698-m02) Calling .DriverName
	I0729 11:47:21.366049  144433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:47:21.366071  144433 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHHostname
	I0729 11:47:21.368558  144433 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:47:21.368943  144433 main.go:141] libmachine: (ha-691698-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b5:f9", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:42:11 +0000 UTC Type:0 Mac:52:54:00:d9:b5:f9 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:ha-691698-m02 Clientid:01:52:54:00:d9:b5:f9}
	I0729 11:47:21.368995  144433 main.go:141] libmachine: (ha-691698-m02) DBG | domain ha-691698-m02 has defined IP address 192.168.39.5 and MAC address 52:54:00:d9:b5:f9 in network mk-ha-691698
	I0729 11:47:21.369207  144433 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHPort
	I0729 11:47:21.369380  144433 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHKeyPath
	I0729 11:47:21.369536  144433 main.go:141] libmachine: (ha-691698-m02) Calling .GetSSHUsername
	I0729 11:47:21.369674  144433 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m02/id_rsa Username:docker}
	I0729 11:47:21.451936  144433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 11:47:21.466131  144433 kubeconfig.go:125] found "ha-691698" server: "https://192.168.39.254:8443"
	I0729 11:47:21.466163  144433 api_server.go:166] Checking apiserver status ...
	I0729 11:47:21.466208  144433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:47:21.478659  144433 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1409/cgroup
	W0729 11:47:21.487654  144433 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1409/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:47:21.487704  144433 ssh_runner.go:195] Run: ls
	I0729 11:47:21.492076  144433 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 11:47:21.496156  144433 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 11:47:21.496175  144433 status.go:422] ha-691698-m02 apiserver status = Running (err=<nil>)
	I0729 11:47:21.496186  144433 status.go:257] ha-691698-m02 status: &{Name:ha-691698-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 11:47:21.496209  144433 status.go:255] checking status of ha-691698-m04 ...
	I0729 11:47:21.496491  144433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:21.496516  144433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:21.511499  144433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45047
	I0729 11:47:21.511995  144433 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:21.512543  144433 main.go:141] libmachine: Using API Version  1
	I0729 11:47:21.512566  144433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:21.512873  144433 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:21.513112  144433 main.go:141] libmachine: (ha-691698-m04) Calling .GetState
	I0729 11:47:21.514549  144433 status.go:330] ha-691698-m04 host status = "Running" (err=<nil>)
	I0729 11:47:21.514570  144433 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:47:21.514890  144433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:21.514925  144433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:21.529651  144433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36883
	I0729 11:47:21.530031  144433 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:21.530483  144433 main.go:141] libmachine: Using API Version  1
	I0729 11:47:21.530503  144433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:21.530813  144433 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:21.531016  144433 main.go:141] libmachine: (ha-691698-m04) Calling .GetIP
	I0729 11:47:21.533301  144433 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:47:21.533713  144433 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:44:48 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:47:21.533735  144433 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:47:21.533852  144433 host.go:66] Checking if "ha-691698-m04" exists ...
	I0729 11:47:21.534217  144433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:47:21.534255  144433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:47:21.548690  144433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46663
	I0729 11:47:21.549113  144433 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:47:21.549552  144433 main.go:141] libmachine: Using API Version  1
	I0729 11:47:21.549580  144433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:47:21.549855  144433 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:47:21.550043  144433 main.go:141] libmachine: (ha-691698-m04) Calling .DriverName
	I0729 11:47:21.550235  144433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:47:21.550254  144433 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHHostname
	I0729 11:47:21.552567  144433 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:47:21.553072  144433 main.go:141] libmachine: (ha-691698-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:3b:0c", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:44:48 +0000 UTC Type:0 Mac:52:54:00:83:3b:0c Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-691698-m04 Clientid:01:52:54:00:83:3b:0c}
	I0729 11:47:21.553108  144433 main.go:141] libmachine: (ha-691698-m04) DBG | domain ha-691698-m04 has defined IP address 192.168.39.84 and MAC address 52:54:00:83:3b:0c in network mk-ha-691698
	I0729 11:47:21.553221  144433 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHPort
	I0729 11:47:21.553400  144433 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHKeyPath
	I0729 11:47:21.553554  144433 main.go:141] libmachine: (ha-691698-m04) Calling .GetSSHUsername
	I0729 11:47:21.553657  144433 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698-m04/id_rsa Username:docker}
	W0729 11:47:40.061205  144433 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.84:22: connect: no route to host
	W0729 11:47:40.061327  144433 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0729 11:47:40.061344  144433 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	I0729 11:47:40.061353  144433 status.go:257] ha-691698-m04 status: &{Name:ha-691698-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:47:40.061383  144433 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-691698 -n ha-691698
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-691698 logs -n 25: (1.594473005s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-691698 ssh -n ha-691698-m02 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m03_ha-691698-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04:/home/docker/cp-test_ha-691698-m03_ha-691698-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m04 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m03_ha-691698-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp testdata/cp-test.txt                                                | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1858176500/001/cp-test_ha-691698-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698:/home/docker/cp-test_ha-691698-m04_ha-691698.txt                       |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698 sudo cat                                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698.txt                                 |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m02:/home/docker/cp-test_ha-691698-m04_ha-691698-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m02 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m03:/home/docker/cp-test_ha-691698-m04_ha-691698-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n                                                                 | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | ha-691698-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-691698 ssh -n ha-691698-m03 sudo cat                                          | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC | 29 Jul 24 11:34 UTC |
	|         | /home/docker/cp-test_ha-691698-m04_ha-691698-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-691698 node stop m02 -v=7                                                     | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-691698 node start m02 -v=7                                                    | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-691698 -v=7                                                           | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-691698 -v=7                                                                | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-691698 --wait=true -v=7                                                    | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:40 UTC | 29 Jul 24 11:45 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-691698                                                                | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:45 UTC |                     |
	| node    | ha-691698 node delete m03 -v=7                                                   | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:45 UTC | 29 Jul 24 11:45 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-691698 stop -v=7                                                              | ha-691698 | jenkins | v1.33.1 | 29 Jul 24 11:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:40:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:40:24.776357  142228 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:40:24.776768  142228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:40:24.776781  142228 out.go:304] Setting ErrFile to fd 2...
	I0729 11:40:24.776788  142228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:40:24.777255  142228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:40:24.777987  142228 out.go:298] Setting JSON to false
	I0729 11:40:24.779118  142228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4976,"bootTime":1722248249,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:40:24.779187  142228 start.go:139] virtualization: kvm guest
	I0729 11:40:24.781387  142228 out.go:177] * [ha-691698] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:40:24.782985  142228 notify.go:220] Checking for updates...
	I0729 11:40:24.783002  142228 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 11:40:24.784449  142228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:40:24.785837  142228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:40:24.787286  142228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:40:24.788593  142228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:40:24.790034  142228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:40:24.791895  142228 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:40:24.792024  142228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:40:24.792472  142228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:40:24.792552  142228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:40:24.808026  142228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44009
	I0729 11:40:24.808444  142228 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:40:24.809076  142228 main.go:141] libmachine: Using API Version  1
	I0729 11:40:24.809106  142228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:40:24.809411  142228 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:40:24.809591  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:40:24.847051  142228 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:40:24.848333  142228 start.go:297] selected driver: kvm2
	I0729 11:40:24.848348  142228 start.go:901] validating driver "kvm2" against &{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:f
alse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:40:24.848498  142228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:40:24.848905  142228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:40:24.849014  142228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 11:40:24.866887  142228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 11:40:24.867607  142228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:40:24.867656  142228 cni.go:84] Creating CNI manager for ""
	I0729 11:40:24.867663  142228 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 11:40:24.867728  142228 start.go:340] cluster config:
	{Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller
:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:40:24.867912  142228 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:40:24.870070  142228 out.go:177] * Starting "ha-691698" primary control-plane node in "ha-691698" cluster
	I0729 11:40:24.871246  142228 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:40:24.871284  142228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 11:40:24.871298  142228 cache.go:56] Caching tarball of preloaded images
	I0729 11:40:24.871377  142228 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 11:40:24.871390  142228 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 11:40:24.871551  142228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/config.json ...
	I0729 11:40:24.871793  142228 start.go:360] acquireMachinesLock for ha-691698: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 11:40:24.871838  142228 start.go:364] duration metric: took 25.882µs to acquireMachinesLock for "ha-691698"
	I0729 11:40:24.871850  142228 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:40:24.871856  142228 fix.go:54] fixHost starting: 
	I0729 11:40:24.872198  142228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:40:24.872236  142228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:40:24.887142  142228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0729 11:40:24.887585  142228 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:40:24.888041  142228 main.go:141] libmachine: Using API Version  1
	I0729 11:40:24.888067  142228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:40:24.888429  142228 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:40:24.888602  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:40:24.888753  142228 main.go:141] libmachine: (ha-691698) Calling .GetState
	I0729 11:40:24.890504  142228 fix.go:112] recreateIfNeeded on ha-691698: state=Running err=<nil>
	W0729 11:40:24.890525  142228 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:40:24.893184  142228 out.go:177] * Updating the running kvm2 "ha-691698" VM ...
	I0729 11:40:24.894638  142228 machine.go:94] provisionDockerMachine start ...
	I0729 11:40:24.894660  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:40:24.894892  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:24.897375  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:24.897825  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:24.897856  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:24.898018  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:40:24.898197  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:24.898344  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:24.898480  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:40:24.898661  142228 main.go:141] libmachine: Using SSH client type: native
	I0729 11:40:24.898896  142228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:40:24.898911  142228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:40:25.005231  142228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-691698
	
	I0729 11:40:25.005260  142228 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:40:25.005552  142228 buildroot.go:166] provisioning hostname "ha-691698"
	I0729 11:40:25.005574  142228 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:40:25.005779  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:25.008541  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.008878  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.008907  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.009069  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:40:25.009262  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.009422  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.009522  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:40:25.009646  142228 main.go:141] libmachine: Using SSH client type: native
	I0729 11:40:25.009857  142228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:40:25.009876  142228 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-691698 && echo "ha-691698" | sudo tee /etc/hostname
	I0729 11:40:25.131528  142228 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-691698
	
	I0729 11:40:25.131580  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:25.134273  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.134739  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.134769  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.134954  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:40:25.135172  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.135382  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.135545  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:40:25.135689  142228 main.go:141] libmachine: Using SSH client type: native
	I0729 11:40:25.135881  142228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:40:25.135903  142228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-691698' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-691698/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-691698' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:40:25.241772  142228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:40:25.241800  142228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 11:40:25.241818  142228 buildroot.go:174] setting up certificates
	I0729 11:40:25.241828  142228 provision.go:84] configureAuth start
	I0729 11:40:25.241836  142228 main.go:141] libmachine: (ha-691698) Calling .GetMachineName
	I0729 11:40:25.242136  142228 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:40:25.245000  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.245437  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.245463  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.245642  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:25.248013  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.248332  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.248356  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.248462  142228 provision.go:143] copyHostCerts
	I0729 11:40:25.248493  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:40:25.248542  142228 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 11:40:25.248553  142228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 11:40:25.248630  142228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 11:40:25.248744  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:40:25.248775  142228 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 11:40:25.248785  142228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 11:40:25.248828  142228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 11:40:25.248890  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:40:25.248912  142228 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 11:40:25.248921  142228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 11:40:25.248953  142228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 11:40:25.249035  142228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.ha-691698 san=[127.0.0.1 192.168.39.244 ha-691698 localhost minikube]
	I0729 11:40:25.324097  142228 provision.go:177] copyRemoteCerts
	I0729 11:40:25.324177  142228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:40:25.324208  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:25.327170  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.327563  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.327597  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.327753  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:40:25.327970  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.328176  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:40:25.328340  142228 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:40:25.410895  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 11:40:25.410998  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:40:25.437258  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 11:40:25.437330  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 11:40:25.462358  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 11:40:25.462423  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 11:40:25.487750  142228 provision.go:87] duration metric: took 245.906243ms to configureAuth
	I0729 11:40:25.487785  142228 buildroot.go:189] setting minikube options for container-runtime
	I0729 11:40:25.488085  142228 config.go:182] Loaded profile config "ha-691698": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:40:25.488169  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:40:25.490646  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.490997  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:40:25.491024  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:40:25.491204  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:40:25.491412  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.491594  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:40:25.491728  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:40:25.491857  142228 main.go:141] libmachine: Using SSH client type: native
	I0729 11:40:25.492022  142228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:40:25.492037  142228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 11:41:56.288822  142228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 11:41:56.288856  142228 machine.go:97] duration metric: took 1m31.394204468s to provisionDockerMachine
	I0729 11:41:56.288870  142228 start.go:293] postStartSetup for "ha-691698" (driver="kvm2")
	I0729 11:41:56.288882  142228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:41:56.288899  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.289266  142228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:41:56.289297  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:41:56.292548  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.292891  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.292921  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.293127  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:41:56.293338  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.293488  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:41:56.293612  142228 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:41:56.375916  142228 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:41:56.380305  142228 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 11:41:56.380341  142228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 11:41:56.380415  142228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 11:41:56.380488  142228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 11:41:56.380499  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /etc/ssl/certs/1209632.pem
	I0729 11:41:56.380588  142228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:41:56.390445  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:41:56.414898  142228 start.go:296] duration metric: took 126.009826ms for postStartSetup
	I0729 11:41:56.414963  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.415323  142228 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 11:41:56.415352  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:41:56.418237  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.418589  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.418623  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.418827  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:41:56.419035  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.419220  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:41:56.419380  142228 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	W0729 11:41:56.503390  142228 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 11:41:56.503423  142228 fix.go:56] duration metric: took 1m31.631562238s for fixHost
	I0729 11:41:56.503450  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:41:56.506250  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.506656  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.506677  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.506870  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:41:56.507069  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.507259  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.507400  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:41:56.507583  142228 main.go:141] libmachine: Using SSH client type: native
	I0729 11:41:56.507751  142228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0729 11:41:56.507760  142228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 11:41:56.614056  142228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722253316.587985785
	
	I0729 11:41:56.614085  142228 fix.go:216] guest clock: 1722253316.587985785
	I0729 11:41:56.614095  142228 fix.go:229] Guest: 2024-07-29 11:41:56.587985785 +0000 UTC Remote: 2024-07-29 11:41:56.503434193 +0000 UTC m=+91.765146958 (delta=84.551592ms)
	I0729 11:41:56.614123  142228 fix.go:200] guest clock delta is within tolerance: 84.551592ms
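The "guest clock" lines above compare the VM's wall clock (read over SSH with `date +%s.%N`) against the host's time and log the skew. A minimal sketch of that comparison, assuming nothing about minikube's internals beyond what the log shows (the function name and tolerance value below are hypothetical):

```go
// Illustrative only: compute the guest/host clock skew and check it against a
// tolerance, similar to the delta reported by fix.go above.
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute skew between guest and host
// clocks is at or below the allowed threshold, and returns the skew.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(84 * time.Millisecond) // skew similar to the one logged above
	if delta, ok := withinTolerance(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
```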
	I0729 11:41:56.614131  142228 start.go:83] releasing machines lock for "ha-691698", held for 1m31.7422846s
	I0729 11:41:56.614158  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.614457  142228 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:41:56.617095  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.617554  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.617580  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.617722  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.618341  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.618536  142228 main.go:141] libmachine: (ha-691698) Calling .DriverName
	I0729 11:41:56.618659  142228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:41:56.618721  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:41:56.618777  142228 ssh_runner.go:195] Run: cat /version.json
	I0729 11:41:56.618803  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHHostname
	I0729 11:41:56.621319  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.621422  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.621684  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.621709  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.621884  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:56.621904  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:41:56.621908  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:56.622091  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHPort
	I0729 11:41:56.622150  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.622253  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHKeyPath
	I0729 11:41:56.622320  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:41:56.622367  142228 main.go:141] libmachine: (ha-691698) Calling .GetSSHUsername
	I0729 11:41:56.622470  142228 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:41:56.622476  142228 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/ha-691698/id_rsa Username:docker}
	I0729 11:41:56.733830  142228 ssh_runner.go:195] Run: systemctl --version
	I0729 11:41:56.743088  142228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 11:41:56.907039  142228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 11:41:56.913007  142228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 11:41:56.913075  142228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:41:56.922959  142228 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 11:41:56.922986  142228 start.go:495] detecting cgroup driver to use...
	I0729 11:41:56.923051  142228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 11:41:56.940108  142228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 11:41:56.954882  142228 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:41:56.954961  142228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:41:56.969223  142228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:41:56.983902  142228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:41:57.130869  142228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:41:57.278208  142228 docker.go:233] disabling docker service ...
	I0729 11:41:57.278293  142228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:41:57.295736  142228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:41:57.310177  142228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:41:57.457062  142228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:41:57.603860  142228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:41:57.618170  142228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:41:57.637555  142228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 11:41:57.637643  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.648700  142228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 11:41:57.648763  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.659809  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.670683  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.681281  142228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:41:57.692558  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.703876  142228 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.714676  142228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 11:41:57.725026  142228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:41:57.734476  142228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:41:57.744043  142228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:41:57.896926  142228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 11:41:58.175176  142228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 11:41:58.175264  142228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 11:41:58.180455  142228 start.go:563] Will wait 60s for crictl version
	I0729 11:41:58.180528  142228 ssh_runner.go:195] Run: which crictl
	I0729 11:41:58.184306  142228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:41:58.221661  142228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 11:41:58.221749  142228 ssh_runner.go:195] Run: crio --version
	I0729 11:41:58.249977  142228 ssh_runner.go:195] Run: crio --version
	I0729 11:41:58.282116  142228 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 11:41:58.283532  142228 main.go:141] libmachine: (ha-691698) Calling .GetIP
	I0729 11:41:58.286259  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:58.286626  142228 main.go:141] libmachine: (ha-691698) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:22:44", ip: ""} in network mk-ha-691698: {Iface:virbr1 ExpiryTime:2024-07-29 12:30:32 +0000 UTC Type:0 Mac:52:54:00:5a:22:44 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-691698 Clientid:01:52:54:00:5a:22:44}
	I0729 11:41:58.286661  142228 main.go:141] libmachine: (ha-691698) DBG | domain ha-691698 has defined IP address 192.168.39.244 and MAC address 52:54:00:5a:22:44 in network mk-ha-691698
	I0729 11:41:58.286874  142228 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 11:41:58.291838  142228 kubeadm.go:883] updating cluster {Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:41:58.291978  142228 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 11:41:58.292022  142228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:41:58.335821  142228 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:41:58.335847  142228 crio.go:433] Images already preloaded, skipping extraction
	I0729 11:41:58.335896  142228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:41:58.371463  142228 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 11:41:58.371491  142228 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:41:58.371505  142228 kubeadm.go:934] updating node { 192.168.39.244 8443 v1.30.3 crio true true} ...
	I0729 11:41:58.371640  142228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-691698 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:41:58.371728  142228 ssh_runner.go:195] Run: crio config
	I0729 11:41:58.421750  142228 cni.go:84] Creating CNI manager for ""
	I0729 11:41:58.421778  142228 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 11:41:58.421790  142228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:41:58.421824  142228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.244 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-691698 NodeName:ha-691698 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:41:58.422000  142228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-691698"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 11:41:58.422024  142228 kube-vip.go:115] generating kube-vip config ...
	I0729 11:41:58.422077  142228 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 11:41:58.433584  142228 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 11:41:58.433737  142228 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 11:41:58.433805  142228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:41:58.443766  142228 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:41:58.443864  142228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 11:41:58.453558  142228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 11:41:58.473241  142228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:41:58.493642  142228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 11:41:58.513420  142228 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 11:41:58.536252  142228 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 11:41:58.540664  142228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:41:58.695769  142228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:41:58.710227  142228 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698 for IP: 192.168.39.244
	I0729 11:41:58.710254  142228 certs.go:194] generating shared ca certs ...
	I0729 11:41:58.710270  142228 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:41:58.710437  142228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 11:41:58.710535  142228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 11:41:58.710550  142228 certs.go:256] generating profile certs ...
	I0729 11:41:58.710627  142228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/client.key
	I0729 11:41:58.710656  142228 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.a5028b36
	I0729 11:41:58.710668  142228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.a5028b36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244 192.168.39.5 192.168.39.23 192.168.39.254]
	I0729 11:41:58.871227  142228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.a5028b36 ...
	I0729 11:41:58.871262  142228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.a5028b36: {Name:mkdaac54e51c3106526d4dc2fc72bc59c935ccf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:41:58.871465  142228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.a5028b36 ...
	I0729 11:41:58.871482  142228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.a5028b36: {Name:mkef0f29cf9214d3068dd6b1e248f6f75204c16b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:41:58.871585  142228 certs.go:381] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt.a5028b36 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt
	I0729 11:41:58.871744  142228 certs.go:385] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key.a5028b36 -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key
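The apiserver profile certificate generated above carries IP SANs for the service address, localhost, each control-plane node, and the HA VIP (192.168.39.254). A minimal sketch of issuing such a certificate with Go's crypto/x509, assuming a locally generated CA as a stand-in for minikubeCA (error handling elided; all names here are illustrative, not minikube's code):

```go
// Illustrative only: sign a server certificate whose SANs are IP addresses.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// CA key pair and self-signed CA certificate (stand-in for the real CA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate with IP SANs like those listed in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.244"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Printf("signed apiserver cert, %d bytes DER\n", len(leafDER))
}
```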
	I0729 11:41:58.871883  142228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key
	I0729 11:41:58.871899  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 11:41:58.871914  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 11:41:58.871928  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 11:41:58.871942  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 11:41:58.871954  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 11:41:58.871966  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 11:41:58.871978  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 11:41:58.871991  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 11:41:58.872037  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 11:41:58.872064  142228 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 11:41:58.872073  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 11:41:58.872092  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 11:41:58.872115  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:41:58.872140  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 11:41:58.872176  142228 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 11:41:58.872201  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem -> /usr/share/ca-certificates/120963.pem
	I0729 11:41:58.872214  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /usr/share/ca-certificates/1209632.pem
	I0729 11:41:58.872227  142228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:41:58.872871  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:41:58.923682  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 11:41:59.028122  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:41:59.102997  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 11:41:59.291661  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 11:41:59.498211  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 11:41:59.702131  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:41:59.890939  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/ha-691698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 11:42:00.008589  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 11:42:00.150765  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 11:42:00.247320  142228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:42:00.283964  142228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:42:00.307915  142228 ssh_runner.go:195] Run: openssl version
	I0729 11:42:00.316193  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 11:42:00.335882  142228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 11:42:00.345245  142228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 11:42:00.345319  142228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 11:42:00.363844  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
	I0729 11:42:00.395775  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 11:42:00.423321  142228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 11:42:00.431700  142228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 11:42:00.431773  142228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 11:42:00.440624  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:42:00.455032  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:42:00.468899  142228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:42:00.475414  142228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:42:00.475482  142228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:42:00.483106  142228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
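The preceding sequence installs each CA into /usr/share/ca-certificates and links it from /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0), which is how OpenSSL-based clients locate trusted CAs. A rough sketch of that step, shelling out to openssl for the hash just as the log does (paths and the helper name are assumptions for illustration, not minikube's code):

```go
// Illustrative only: link a CA certificate into /etc/ssl/certs/<subject-hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	// openssl prints the subject hash used for the trust-store symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, then point <hash>.0 at the CA certificate.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```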
	I0729 11:42:00.496646  142228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:42:00.503675  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:42:00.509769  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:42:00.517537  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:42:00.525966  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:42:00.534382  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:42:00.542592  142228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
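The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is brought back up. Roughly the same check expressed in Go, as a sketch (the path below is one of the files tested in the log; the helper name is made up):

```go
// Illustrative only: report whether a PEM certificate expires within a window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the window,
	// mirroring what openssl's -checkend flag tests.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```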
	I0729 11:42:00.550650  142228 kubeadm.go:392] StartCluster: {Name:ha-691698 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-691698 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.23 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.84 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:42:00.550776  142228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 11:42:00.550825  142228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:42:00.603481  142228 cri.go:89] found id: "29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843"
	I0729 11:42:00.603514  142228 cri.go:89] found id: "ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11"
	I0729 11:42:00.603521  142228 cri.go:89] found id: "51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46"
	I0729 11:42:00.603531  142228 cri.go:89] found id: "05903437cede24841c12e3528eca50aacca702174d5674c4694e77480051fc97"
	I0729 11:42:00.603536  142228 cri.go:89] found id: "e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566"
	I0729 11:42:00.603540  142228 cri.go:89] found id: "f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397"
	I0729 11:42:00.603544  142228 cri.go:89] found id: "24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07"
	I0729 11:42:00.603548  142228 cri.go:89] found id: "5fb3e15e6fe5f14a206b948a13cf85693e19cec32f336f85024559f542522af4"
	I0729 11:42:00.603552  142228 cri.go:89] found id: "cfc6bb6aa4f7b3d7c9249429bd4afd574bc9d92d4bb437c37d3259df42dee674"
	I0729 11:42:00.603560  142228 cri.go:89] found id: "7d15eebdab78d379c854debcbf3c7c75ebc774b65df62b203aa7b6aafcd4c7ae"
	I0729 11:42:00.603564  142228 cri.go:89] found id: "3e163d1ef4b1b78646dacf650dea3882b88d05b40fc7721405a3095135eab4bb"
	I0729 11:42:00.603570  142228 cri.go:89] found id: "0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53"
	I0729 11:42:00.603573  142228 cri.go:89] found id: "833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486"
	I0729 11:42:00.603578  142228 cri.go:89] found id: "2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756"
	I0729 11:42:00.603585  142228 cri.go:89] found id: "2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323"
	I0729 11:42:00.603590  142228 cri.go:89] found id: "24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0"
	I0729 11:42:00.603593  142228 cri.go:89] found id: "1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d"
	I0729 11:42:00.603600  142228 cri.go:89] found id: "0b984e1e87ad3ad4c6ab9defc9564db5b6d87774b023866f533b9f778be4f48d"
	I0729 11:42:00.603607  142228 cri.go:89] found id: ""
	I0729 11:42:00.603668  142228 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.677913037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253660677884449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=615ec7a2-364e-47a6-8ac2-be40ffbbd174 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.678400603Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87343ae4-0c30-4e4a-b7d5-b347c257e526 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.678457229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87343ae4-0c30-4e4a-b7d5-b347c257e526 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.679017619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:973ad904749b3bac9b05f8e71171231ae6361a24ead1f752e062f6279e91493e,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253498553102228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdc756e57258c28b832d79ce01adca1bd5873b5d76b82e532a622f4e38a232e,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253363559198417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76745dad7b41c48929f36faf8ef63848b9b6cfd4a087a0fa1176ba5de5bdea70,PodSandboxId:50f119d8186f40739369e20530336e4a3cdd5817447844cabdc3ae1072d5d80f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722253352853383410,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb1ff299a498b985d77ca9503897a1f50bccd5168d3155c55a706e62986230f,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253351734412633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a86f388a73255f4296d5a1c5912289fa84b6271f3cafd3e24cc4b0dda2f3554d,PodSandboxId:2ab993e81dcd50362030977dacfd8a791b23516b398ad194c81fd25447f64ce4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722253330700237365,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bba932b45fc610b002ddc98e5da80b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722253319760490986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843,PodSandboxId:7a603ee93794e9172dec48067d3971c2b975748779f16725f61f391cb635a3b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320086621959,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11,PodSandboxId:db19f608bd022d02c46fc19a1f9415ba47dd011ce34d0466e74ec1a7fafadd52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320002066473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46,PodSandboxId:e00235e1a109fea7897fb4cc15e55a8a04911b5211ffd4e79b5c2ce000217122,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722253319712946192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name
: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05903437cede24841c12e3528eca50aacca702174d5674c4694e77480051fc97,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253319590114552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566,PodSandboxId:e671b1b6a37b90a609834ca1b97cba7904e9b09314c9290f8cde760c1cc7187f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253319420785934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb3e15e6fe5f14a206b948a13cf85693e19cec32f336f85024559f542522af4,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722253319360482195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07,PodSandboxId:bcab417350922782b0295673049bbf8cdc00112ddcd42c10a5946a78131fb6ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253319378830721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5f
fb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397,PodSandboxId:4c892374c85fc968454d6969d59a211e44d0bd9788309eae943b9cbc4154e8db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722253319397248692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Ann
otations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722252826211018342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annot
ations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690309362743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690267316165,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722252678491071355,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252675058385510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252655364856326,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252655267651801,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87343ae4-0c30-4e4a-b7d5-b347c257e526 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.720837865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a14d5556-eeb6-473d-b8bb-338bb5713cc2 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.720918027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a14d5556-eeb6-473d-b8bb-338bb5713cc2 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.721960524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3462218a-74fd-46e8-ba0b-e408da4c0289 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.722412962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253660722391038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3462218a-74fd-46e8-ba0b-e408da4c0289 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.722909762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6171d541-9079-4eb8-a575-5ede29c5bb2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.722979217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6171d541-9079-4eb8-a575-5ede29c5bb2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.723379303Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:973ad904749b3bac9b05f8e71171231ae6361a24ead1f752e062f6279e91493e,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253498553102228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdc756e57258c28b832d79ce01adca1bd5873b5d76b82e532a622f4e38a232e,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253363559198417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76745dad7b41c48929f36faf8ef63848b9b6cfd4a087a0fa1176ba5de5bdea70,PodSandboxId:50f119d8186f40739369e20530336e4a3cdd5817447844cabdc3ae1072d5d80f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722253352853383410,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb1ff299a498b985d77ca9503897a1f50bccd5168d3155c55a706e62986230f,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253351734412633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a86f388a73255f4296d5a1c5912289fa84b6271f3cafd3e24cc4b0dda2f3554d,PodSandboxId:2ab993e81dcd50362030977dacfd8a791b23516b398ad194c81fd25447f64ce4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722253330700237365,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bba932b45fc610b002ddc98e5da80b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722253319760490986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843,PodSandboxId:7a603ee93794e9172dec48067d3971c2b975748779f16725f61f391cb635a3b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320086621959,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11,PodSandboxId:db19f608bd022d02c46fc19a1f9415ba47dd011ce34d0466e74ec1a7fafadd52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320002066473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46,PodSandboxId:e00235e1a109fea7897fb4cc15e55a8a04911b5211ffd4e79b5c2ce000217122,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722253319712946192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name
: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05903437cede24841c12e3528eca50aacca702174d5674c4694e77480051fc97,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253319590114552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566,PodSandboxId:e671b1b6a37b90a609834ca1b97cba7904e9b09314c9290f8cde760c1cc7187f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253319420785934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb3e15e6fe5f14a206b948a13cf85693e19cec32f336f85024559f542522af4,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722253319360482195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07,PodSandboxId:bcab417350922782b0295673049bbf8cdc00112ddcd42c10a5946a78131fb6ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253319378830721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5f
fb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397,PodSandboxId:4c892374c85fc968454d6969d59a211e44d0bd9788309eae943b9cbc4154e8db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722253319397248692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Ann
otations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722252826211018342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annot
ations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690309362743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690267316165,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722252678491071355,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252675058385510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252655364856326,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252655267651801,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6171d541-9079-4eb8-a575-5ede29c5bb2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.763477856Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63816be8-6fcf-4ca0-a452-e22b1eef14e8 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.763565014Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63816be8-6fcf-4ca0-a452-e22b1eef14e8 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.764564078Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0359b644-5ea5-46ec-a146-d8bc0aae81e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.765067578Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253660765041050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0359b644-5ea5-46ec-a146-d8bc0aae81e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.765596870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f56e0b30-4acd-4500-affe-139cc33c2ddb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.765758939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f56e0b30-4acd-4500-affe-139cc33c2ddb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.766212500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:973ad904749b3bac9b05f8e71171231ae6361a24ead1f752e062f6279e91493e,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253498553102228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdc756e57258c28b832d79ce01adca1bd5873b5d76b82e532a622f4e38a232e,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253363559198417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76745dad7b41c48929f36faf8ef63848b9b6cfd4a087a0fa1176ba5de5bdea70,PodSandboxId:50f119d8186f40739369e20530336e4a3cdd5817447844cabdc3ae1072d5d80f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722253352853383410,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb1ff299a498b985d77ca9503897a1f50bccd5168d3155c55a706e62986230f,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253351734412633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a86f388a73255f4296d5a1c5912289fa84b6271f3cafd3e24cc4b0dda2f3554d,PodSandboxId:2ab993e81dcd50362030977dacfd8a791b23516b398ad194c81fd25447f64ce4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722253330700237365,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bba932b45fc610b002ddc98e5da80b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722253319760490986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843,PodSandboxId:7a603ee93794e9172dec48067d3971c2b975748779f16725f61f391cb635a3b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320086621959,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11,PodSandboxId:db19f608bd022d02c46fc19a1f9415ba47dd011ce34d0466e74ec1a7fafadd52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320002066473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46,PodSandboxId:e00235e1a109fea7897fb4cc15e55a8a04911b5211ffd4e79b5c2ce000217122,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722253319712946192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name
: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05903437cede24841c12e3528eca50aacca702174d5674c4694e77480051fc97,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253319590114552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566,PodSandboxId:e671b1b6a37b90a609834ca1b97cba7904e9b09314c9290f8cde760c1cc7187f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253319420785934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb3e15e6fe5f14a206b948a13cf85693e19cec32f336f85024559f542522af4,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722253319360482195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07,PodSandboxId:bcab417350922782b0295673049bbf8cdc00112ddcd42c10a5946a78131fb6ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253319378830721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5f
fb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397,PodSandboxId:4c892374c85fc968454d6969d59a211e44d0bd9788309eae943b9cbc4154e8db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722253319397248692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Ann
otations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722252826211018342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annot
ations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690309362743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690267316165,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722252678491071355,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252675058385510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252655364856326,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252655267651801,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f56e0b30-4acd-4500-affe-139cc33c2ddb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.807346611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a5cfa2e-4216-4080-8912-f13f23144617 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.807431932Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a5cfa2e-4216-4080-8912-f13f23144617 name=/runtime.v1.RuntimeService/Version
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.808602142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14d07e12-f35e-406b-868f-6d77ff1003ac name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.809108851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722253660809082305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14d07e12-f35e-406b-868f-6d77ff1003ac name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.809619963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc6666ab-9293-4a66-b005-5fdc1e779e3c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.809742160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc6666ab-9293-4a66-b005-5fdc1e779e3c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 11:47:40 ha-691698 crio[3887]: time="2024-07-29 11:47:40.810168266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:973ad904749b3bac9b05f8e71171231ae6361a24ead1f752e062f6279e91493e,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722253498553102228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdc756e57258c28b832d79ce01adca1bd5873b5d76b82e532a622f4e38a232e,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722253363559198417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76745dad7b41c48929f36faf8ef63848b9b6cfd4a087a0fa1176ba5de5bdea70,PodSandboxId:50f119d8186f40739369e20530336e4a3cdd5817447844cabdc3ae1072d5d80f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722253352853383410,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annotations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fb1ff299a498b985d77ca9503897a1f50bccd5168d3155c55a706e62986230f,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722253351734412633,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a86f388a73255f4296d5a1c5912289fa84b6271f3cafd3e24cc4b0dda2f3554d,PodSandboxId:2ab993e81dcd50362030977dacfd8a791b23516b398ad194c81fd25447f64ce4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722253330700237365,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bba932b45fc610b002ddc98e5da80b5,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134,PodSandboxId:26c07c11033389d6604b9d783bb5d5162b233f945032367997d782ef1b9e5bd3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722253319760490986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 694c60e1-9d4e-4fea-96e6-21554bbf1aaa,},Annotations:map[string]string{io.kubernetes.container.hash: b7722330,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843,PodSandboxId:7a603ee93794e9172dec48067d3971c2b975748779f16725f61f391cb635a3b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320086621959,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11,PodSandboxId:db19f608bd022d02c46fc19a1f9415ba47dd011ce34d0466e74ec1a7fafadd52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722253320002066473,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.k
ubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46,PodSandboxId:e00235e1a109fea7897fb4cc15e55a8a04911b5211ffd4e79b5c2ce000217122,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722253319712946192,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name
: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05903437cede24841c12e3528eca50aacca702174d5674c4694e77480051fc97,PodSandboxId:2b7c38387340a6cac7d64f8c14f6d6966b2e77986ae96fa1720e606e5498e44f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722253319590114552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-691698,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b9e5f0877ca264a45eb8a7bf07a4ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c71bd6f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566,PodSandboxId:e671b1b6a37b90a609834ca1b97cba7904e9b09314c9290f8cde760c1cc7187f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722253319420785934,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fb3e15e6fe5f14a206b948a13cf85693e19cec32f336f85024559f542522af4,PodSandboxId:656cbf9360b236dedf3f0878a50472b6fe24ae4e18c0205abe51d93f12779358,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722253319360482195,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 3049f42a07ecb14cd8bfdb4d5cfad196,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07,PodSandboxId:bcab417350922782b0295673049bbf8cdc00112ddcd42c10a5946a78131fb6ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722253319378830721,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5f
fb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397,PodSandboxId:4c892374c85fc968454d6969d59a211e44d0bd9788309eae943b9cbc4154e8db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722253319397248692,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Ann
otations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:238fb47cd6e363ef0e2dbf575f8ae9e7bb031676dbf646a8b15dbb6fb317f02b,PodSandboxId:764f56dfda80f39ea85178454bdce7758a0a16d771e3263512a1499452c804da,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722252826211018342,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t69zw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ba70f798-7f59-4cd9-955c-82ce880ebcf9,},Annot
ations:map[string]string{io.kubernetes.container.hash: bd2a3e2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53,PodSandboxId:d32f436d019c4e796de3081dc4b72baea3c5b9a1838331ab255b6bbfb8ca2b72,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690309362743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-r48d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0329d8-26c1-49e5-8af9-8ecda56993ca,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2f42a3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486,PodSandboxId:8d892f55e419c5e8e29500c7899fab5941dfd55faf963b8ca8310ae17ea7e41b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722252690267316165,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p7zbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b85aaa0-2ae6-4883-b4e1-8e8af1eea933,},Annotations:map[string]string{io.kubernetes.container.hash: cd6d0062,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756,PodSandboxId:ff04fbe0e70400bb4ff924c1605d2561e183ec590bf1716db1f156b4ff929868,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722252678491071355,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gl972,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: caf4ea26-7d7a-419f-9493-67639c78ed1d,},Annotations:map[string]string{io.kubernetes.container.hash: f36228b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323,PodSandboxId:7978ad5ef51fb40b6504cf7dcc56453a2f5febdfc77d28e8dc88928912bf7f49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722252675058385510,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5hn2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c788f-9f8d-421e-b967-89b9154ea946,},Annotations:map[string]string{io.kubernetes.container.hash: 59f75994,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0,PodSandboxId:f7a6dae3abd7e06337b1180b8e28580ed18b58a01a961b0abde1469655ff1283,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722252655364856326,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bb5ffb5c77b0a888651c9baeb69857d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d,PodSandboxId:476f4c4be958126def7f8e5bd82475f498d2f8155f244578b2620a7a1241a680,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722252655267651801,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-691698,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e090ac15413f491114ca03adef34911,},Annotations:map[string]string{io.kubernetes.container.hash: 3238c900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc6666ab-9293-4a66-b005-5fdc1e779e3c name=/runtime.v1.RuntimeService/ListContainers
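
The entries above are debug-level gRPC interceptor traces from CRI-O's journal on the ha-691698 node: periodic Version, ImageFsInfo, and ListContainers calls against the runtime and their responses. A minimal sketch of re-collecting the same data by hand, assuming the minikube profile is named ha-691698 as in the logs:

  minikube ssh -p ha-691698 "sudo journalctl -u crio --no-pager -n 200"
  minikube ssh -p ha-691698 "sudo crictl version"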
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	973ad904749b3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       6                   26c07c1103338       storage-provisioner
	8cdc756e57258       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   656cbf9360b23       kube-controller-manager-ha-691698
	76745dad7b41c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   50f119d8186f4       busybox-fc5497c4f-t69zw
	9fb1ff299a498       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Running             kube-apiserver            3                   2b7c38387340a       kube-apiserver-ha-691698
	a86f388a73255       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   2ab993e81dcd5       kube-vip-ha-691698
	29581f41078e4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   7a603ee93794e       coredns-7db6d8ff4d-r48d8
	ccbd2ebd46e13       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   db19f608bd022       coredns-7db6d8ff4d-p7zbj
	2d706a5426fe1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       5                   26c07c1103338       storage-provisioner
	51064326e4ef3       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   e00235e1a109f       kindnet-gl972
	05903437cede2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   2b7c38387340a       kube-apiserver-ha-691698
	e32dad0451680       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   e671b1b6a37b9       etcd-ha-691698
	f0c4593139567       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   4c892374c85fc       kube-proxy-5hn2s
	24e35a070016e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   bcab417350922       kube-scheduler-ha-691698
	5fb3e15e6fe5f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   656cbf9360b23       kube-controller-manager-ha-691698
	238fb47cd6e36       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   764f56dfda80f       busybox-fc5497c4f-t69zw
	0d819119d1f04       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   d32f436d019c4       coredns-7db6d8ff4d-r48d8
	833566290ab18       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   8d892f55e419c       coredns-7db6d8ff4d-p7zbj
	2c476db3ff154       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   ff04fbe0e7040       kindnet-gl972
	2da9ca3c5237b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   7978ad5ef51fb       kube-proxy-5hn2s
	24326f59696b1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   f7a6dae3abd7e       kube-scheduler-ha-691698
	1d0e28e4eb5d8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   476f4c4be9581       etcd-ha-691698
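
The table above is the node's container status listing (equivalent to crictl ps -a output): every attempt is shown, including the Exited first attempts of the control-plane containers from before the restart. A hedged sketch of inspecting an individual attempt, reusing the profile name and a container ID from the table:

  minikube ssh -p ha-691698 "sudo crictl ps -a"
  minikube ssh -p ha-691698 "sudo crictl logs 05903437cede2"

crictl normally resolves unambiguous ID prefixes, so the truncated IDs printed in the table can usually be passed directly.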
	
	
	==> coredns [0d819119d1f04e3e28db6b8fab5e0f9108a1455e7149eea12b04cc9f9c533f53] <==
	[INFO] 10.244.0.4:50254 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000059389s
	[INFO] 10.244.0.4:48812 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00188043s
	[INFO] 10.244.1.2:43643 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173662s
	[INFO] 10.244.1.2:52260 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003470125s
	[INFO] 10.244.1.2:54673 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136747s
	[INFO] 10.244.2.2:34318 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000273221s
	[INFO] 10.244.2.2:60262 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001476515s
	[INFO] 10.244.2.2:57052 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000142747s
	[INFO] 10.244.2.2:54120 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108997s
	[INFO] 10.244.1.2:44298 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081482s
	[INFO] 10.244.1.2:57785 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116033s
	[INFO] 10.244.2.2:38389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154869s
	[INFO] 10.244.2.2:33473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139061s
	[INFO] 10.244.2.2:36153 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064585s
	[INFO] 10.244.0.4:36379 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097216s
	[INFO] 10.244.0.4:47834 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063726s
	[INFO] 10.244.1.2:33111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120166s
	[INFO] 10.244.2.2:43983 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122897s
	[INFO] 10.244.2.2:35012 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148813s
	[INFO] 10.244.2.2:40714 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00011869s
	[INFO] 10.244.0.4:44215 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086794s
	[INFO] 10.244.0.4:38040 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005703s
	[INFO] 10.244.0.4:50677 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108307s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
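
The lines above are CoreDNS query logs (log plugin): remote address and port, query ID, type and name, protocol, response code, flags, and latency, ending with the SIGTERM shutdown when this instance was replaced. A hedged sketch of generating a comparable lookup from inside the cluster, assuming the kubeconfig context created by minikube is named ha-691698:

  kubectl --context ha-691698 run dnstest --rm -it --restart=Never --image=busybox:stable -- nslookup kubernetes.default.svc.cluster.local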
	
	
	==> coredns [29581f41078e4b77c5b410b62a82ac66324c9a97fb9c3a2afa8f901abe51d843] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41828->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41828->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41812->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[997064697]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 11:42:12.255) (total time: 12362ms):
	Trace[997064697]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41812->10.96.0.1:443: read: connection reset by peer 12361ms (11:42:24.617)
	Trace[997064697]: [12.362009663s] [12.362009663s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41812->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[384103912]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 11:42:34.062) (total time: 10000ms):
	Trace[384103912]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:42:44.062)
	Trace[384103912]: [10.000911224s] [10.000911224s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
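
This replacement CoreDNS instance could not reach the in-cluster API endpoint 10.96.0.1:443 during its startup window (no route to host, connection refused, TLS handshake timeout), and its readiness stayed at "Still waiting on: kubernetes" until a list/watch succeeded; that matches the kube-apiserver container being restarted around the same time. A hedged sketch of checks that narrow this down, assuming the kubeconfig context ha-691698:

  kubectl --context ha-691698 get endpoints kubernetes -n default
  kubectl --context ha-691698 -n kube-system get pods -l k8s-app=kube-dns -o wide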
	
	
	==> coredns [833566290ab1898b5a7344acac875f14b677da0a915bba90e9f0d62eb59af486] <==
	[INFO] 10.244.2.2:34056 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219216s
	[INFO] 10.244.2.2:60410 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161507s
	[INFO] 10.244.0.4:59522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147092s
	[INFO] 10.244.0.4:33605 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742361s
	[INFO] 10.244.0.4:54567 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076754s
	[INFO] 10.244.0.4:35616 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072926s
	[INFO] 10.244.0.4:50762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001270357s
	[INFO] 10.244.0.4:56719 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059193s
	[INFO] 10.244.0.4:42114 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124091s
	[INFO] 10.244.0.4:54680 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047725s
	[INFO] 10.244.1.2:33443 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111093s
	[INFO] 10.244.1.2:60576 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102839s
	[INFO] 10.244.2.2:47142 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084964s
	[INFO] 10.244.0.4:35741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015832s
	[INFO] 10.244.0.4:39817 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103529s
	[INFO] 10.244.1.2:45931 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134869s
	[INFO] 10.244.1.2:36836 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000217632s
	[INFO] 10.244.1.2:59273 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000107311s
	[INFO] 10.244.2.2:49049 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000205027s
	[INFO] 10.244.0.4:42280 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127437s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [ccbd2ebd46e1377f97c3dacd70ee764d146de361f3d8e168bacbf9310eb82b11] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[165632698]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 11:42:33.434) (total time: 10001ms):
	Trace[165632698]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:42:43.435)
	Trace[165632698]: [10.001285626s] [10.001285626s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-691698
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_31_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:30:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:47:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:42:47 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:42:47 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:42:47 +0000   Mon, 29 Jul 2024 11:30:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:42:47 +0000   Mon, 29 Jul 2024 11:31:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-691698
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ffcbde1a62f4ed28ef2171c0da37339
	  System UUID:                8ffcbde1-a62f-4ed2-8ef2-171c0da37339
	  Boot ID:                    f8eb0442-fda7-4803-ab40-821f5c33cb8d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-t69zw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-p7zbj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-r48d8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-691698                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-gl972                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-691698             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-691698    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-5hn2s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-691698             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-691698                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 16m    kube-proxy       
	  Normal   Starting                 4m56s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m    kubelet          Node ha-691698 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-691698 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m    kubelet          Node ha-691698 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m    node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal   NodeReady                16m    kubelet          Node ha-691698 status is now: NodeReady
	  Normal   RegisteredNode           15m    node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal   RegisteredNode           14m    node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Warning  ContainerGCFailed        6m40s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m42s  node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal   RegisteredNode           4m42s  node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	  Normal   RegisteredNode           3m8s   node-controller  Node ha-691698 event: Registered Node ha-691698 in Controller
	
	
	Name:               ha-691698-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_32_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:32:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:47:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:46:01 +0000   Mon, 29 Jul 2024 11:46:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:46:01 +0000   Mon, 29 Jul 2024 11:46:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:46:01 +0000   Mon, 29 Jul 2024 11:46:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:46:01 +0000   Mon, 29 Jul 2024 11:46:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    ha-691698-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c019d6e64b644eff86b333652cd5328b
	  System UUID:                c019d6e6-4b64-4eff-86b3-33652cd5328b
	  Boot ID:                    8d642b6f-d885-4b47-8890-605208e38eb4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-22qb4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-691698-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-wrx27                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-691698-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-691698-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-8p4nc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-691698-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-691698-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-691698-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-691698-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-691698-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-691698-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    5m17s (x8 over 5m17s)  kubelet          Node ha-691698-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m17s (x8 over 5m17s)  kubelet          Node ha-691698-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m17s (x7 over 5m17s)  kubelet          Node ha-691698-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-691698-m02 event: Registered Node ha-691698-m02 in Controller
	  Normal  NodeNotReady             102s                   node-controller  Node ha-691698-m02 status is now: NodeNotReady
	
	
	Name:               ha-691698-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-691698-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=ha-691698
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T11_34_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:34:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-691698-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:45:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 11:44:53 +0000   Mon, 29 Jul 2024 11:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 11:44:53 +0000   Mon, 29 Jul 2024 11:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 11:44:53 +0000   Mon, 29 Jul 2024 11:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 11:44:53 +0000   Mon, 29 Jul 2024 11:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-691698-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 acedffa77bf44161b125b5360bc5ba83
	  System UUID:                acedffa7-7bf4-4161-b125-b5360bc5ba83
	  Boot ID:                    476c2a79-4d31-467a-9808-931b9ef2342d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bgm87    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-pknpn              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-9k2mb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-691698-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-691698-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-691698-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-691698-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m42s                  node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   RegisteredNode           4m42s                  node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   RegisteredNode           3m8s                   node-controller  Node ha-691698-m04 event: Registered Node ha-691698-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-691698-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-691698-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-691698-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-691698-m04 has been rebooted, boot id: 476c2a79-4d31-467a-9808-931b9ef2342d
	  Normal   NodeReady                2m48s (x2 over 2m48s)  kubelet          Node ha-691698-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 4m2s)    node-controller  Node ha-691698-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.170622] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.056672] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055838] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.156855] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.147139] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.275583] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.097124] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[  +4.229544] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.063086] kauditd_printk_skb: 158 callbacks suppressed
	[Jul29 11:31] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[  +0.086846] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.595904] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.192166] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 11:32] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 11:38] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 11:41] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[  +0.145977] systemd-fstab-generator[3816]: Ignoring "noauto" option for root device
	[  +0.181688] systemd-fstab-generator[3830]: Ignoring "noauto" option for root device
	[  +0.146077] systemd-fstab-generator[3842]: Ignoring "noauto" option for root device
	[  +0.288873] systemd-fstab-generator[3870]: Ignoring "noauto" option for root device
	[  +0.804581] systemd-fstab-generator[3972]: Ignoring "noauto" option for root device
	[Jul29 11:42] kauditd_printk_skb: 225 callbacks suppressed
	[ +19.100063] kauditd_printk_skb: 1 callbacks suppressed
	[ +21.464801] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [1d0e28e4eb5d8ebd86795f2d07c2df408c35ec5091b72e8f342541de0ebf724d] <==
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/29 11:40:25 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-29T11:40:25.71501Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"38b93d7e943acb5d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T11:40:25.71517Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715204Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715234Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715331Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715364Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715396Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38b93d7e943acb5d","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715422Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"63612ca7ef791158"}
	{"level":"info","ts":"2024-07-29T11:40:25.715429Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715439Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715453Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715522Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715611Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715721Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.715758Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:40:25.718181Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.244:2380"}
	{"level":"info","ts":"2024-07-29T11:40:25.718318Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.244:2380"}
	{"level":"info","ts":"2024-07-29T11:40:25.718358Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-691698","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.244:2380"],"advertise-client-urls":["https://192.168.39.244:2379"]}
	
	
	==> etcd [e32dad045168073c23b490fb0ba4275606d652ce324f589cb32e69ff94513566] <==
	{"level":"info","ts":"2024-07-29T11:44:17.156613Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38b93d7e943acb5d","to":"239f4a9a4c2b2b5d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T11:44:17.156904Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:44:17.183774Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38b93d7e943acb5d","to":"239f4a9a4c2b2b5d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T11:44:17.18402Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:44:23.178046Z","caller":"traceutil/trace.go:171","msg":"trace[83441966] transaction","detail":"{read_only:false; response_revision:2495; number_of_response:1; }","duration":"119.457875ms","start":"2024-07-29T11:44:23.05857Z","end":"2024-07-29T11:44:23.178028Z","steps":["trace[83441966] 'process raft request'  (duration: 119.232966ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T11:45:00.698109Z","caller":"traceutil/trace.go:171","msg":"trace[1338971810] transaction","detail":"{read_only:false; response_revision:2613; number_of_response:1; }","duration":"109.374913ms","start":"2024-07-29T11:45:00.588506Z","end":"2024-07-29T11:45:00.697881Z","steps":["trace[1338971810] 'process raft request'  (duration: 109.234588ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T11:45:07.053853Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.23:36266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-29T11:45:07.074691Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.23:36282","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-29T11:45:07.085949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38b93d7e943acb5d switched to configuration voters=(4087365750677490525 7161053982284648792)"}
	{"level":"info","ts":"2024-07-29T11:45:07.088058Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"ae521d247b31ac74","local-member-id":"38b93d7e943acb5d","removed-remote-peer-id":"239f4a9a4c2b2b5d","removed-remote-peer-urls":["https://192.168.39.23:2380"]}
	{"level":"info","ts":"2024-07-29T11:45:07.088111Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"warn","ts":"2024-07-29T11:45:07.088261Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:45:07.088303Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"warn","ts":"2024-07-29T11:45:07.088546Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:45:07.088593Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:45:07.088706Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"warn","ts":"2024-07-29T11:45:07.088886Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T11:45:07.089002Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"239f4a9a4c2b2b5d","error":"failed to read 239f4a9a4c2b2b5d on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T11:45:07.089046Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"warn","ts":"2024-07-29T11:45:07.089423Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T11:45:07.089468Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38b93d7e943acb5d","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:45:07.089503Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"info","ts":"2024-07-29T11:45:07.089519Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"38b93d7e943acb5d","removed-remote-peer-id":"239f4a9a4c2b2b5d"}
	{"level":"warn","ts":"2024-07-29T11:45:07.101355Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"38b93d7e943acb5d","remote-peer-id-stream-handler":"38b93d7e943acb5d","remote-peer-id-from":"239f4a9a4c2b2b5d"}
	{"level":"warn","ts":"2024-07-29T11:45:07.107439Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.23:55764","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:47:41 up 17 min,  0 users,  load average: 0.23, 0.20, 0.18
	Linux ha-691698 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2c476db3ff154a17eb93ab79d37425623ba0bd538b3f346f3cdcc119f61f1756] <==
	I0729 11:39:49.465783       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:39:59.465811       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:39:59.465856       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:39:59.465995       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:39:59.466015       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:39:59.466065       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:39:59.466081       1 main.go:299] handling current node
	I0729 11:39:59.466093       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:39:59.466098       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:40:09.457348       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:40:09.457395       1 main.go:299] handling current node
	I0729 11:40:09.457415       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:40:09.457420       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:40:09.457559       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:40:09.457580       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:40:09.457633       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:40:09.457638       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:40:19.457732       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:40:19.457835       1 main.go:299] handling current node
	I0729 11:40:19.457864       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:40:19.457904       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:40:19.458040       1 main.go:295] Handling node with IPs: map[192.168.39.23:{}]
	I0729 11:40:19.458141       1 main.go:322] Node ha-691698-m03 has CIDR [10.244.2.0/24] 
	I0729 11:40:19.458256       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:40:19.458310       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [51064326e4ef378463852516d737d73011a98ed07f2acdaccf22ad4bf941be46] <==
	I0729 11:47:00.678290       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:47:10.682487       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:47:10.682585       1 main.go:299] handling current node
	I0729 11:47:10.682627       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:47:10.682645       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:47:10.682825       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:47:10.682857       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:47:20.679791       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:47:20.679888       1 main.go:299] handling current node
	I0729 11:47:20.679919       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:47:20.679937       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:47:20.680062       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:47:20.680081       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:47:30.678791       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:47:30.678933       1 main.go:299] handling current node
	I0729 11:47:30.678960       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:47:30.678978       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	I0729 11:47:30.679119       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:47:30.679139       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:47:40.676993       1 main.go:295] Handling node with IPs: map[192.168.39.84:{}]
	I0729 11:47:40.677045       1 main.go:322] Node ha-691698-m04 has CIDR [10.244.3.0/24] 
	I0729 11:47:40.677259       1 main.go:295] Handling node with IPs: map[192.168.39.244:{}]
	I0729 11:47:40.677297       1 main.go:299] handling current node
	I0729 11:47:40.677315       1 main.go:295] Handling node with IPs: map[192.168.39.5:{}]
	I0729 11:47:40.677323       1 main.go:322] Node ha-691698-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [05903437cede24841c12e3528eca50aacca702174d5674c4694e77480051fc97] <==
	I0729 11:42:00.584516       1 options.go:221] external host was not specified, using 192.168.39.244
	I0729 11:42:00.612118       1 server.go:148] Version: v1.30.3
	I0729 11:42:00.612193       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:42:01.087725       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 11:42:01.100736       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 11:42:01.107341       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 11:42:01.109708       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 11:42:01.109972       1 instance.go:299] Using reconciler: lease
	W0729 11:42:21.083928       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 11:42:21.083972       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 11:42:21.111562       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [9fb1ff299a498b985d77ca9503897a1f50bccd5168d3155c55a706e62986230f] <==
	I0729 11:42:46.281874       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 11:42:46.400896       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 11:42:46.401018       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 11:42:46.401122       1 aggregator.go:165] initial CRD sync complete...
	I0729 11:42:46.401154       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 11:42:46.401161       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 11:42:46.402402       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 11:42:46.402488       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 11:42:46.434646       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 11:42:46.438506       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 11:42:46.438533       1 policy_source.go:224] refreshing policies
	I0729 11:42:46.476750       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 11:42:46.476883       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 11:42:46.477293       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 11:42:46.485234       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0729 11:42:46.495387       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.23]
	I0729 11:42:46.496929       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 11:42:46.503392       1 cache.go:39] Caches are synced for autoregister controller
	I0729 11:42:46.508850       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 11:42:46.512874       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0729 11:42:46.513738       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 11:42:47.284092       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 11:42:48.037720       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.23 192.168.39.244]
	W0729 11:43:08.040330       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.244 192.168.39.5]
	W0729 11:45:18.048346       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.244 192.168.39.5]
	
	
	==> kube-controller-manager [5fb3e15e6fe5f14a206b948a13cf85693e19cec32f336f85024559f542522af4] <==
	I0729 11:42:01.063154       1 serving.go:380] Generated self-signed cert in-memory
	I0729 11:42:01.383871       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 11:42:01.383910       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:42:01.387545       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 11:42:01.387843       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 11:42:01.388100       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 11:42:01.388239       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 11:42:22.125173       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.244:8443/healthz\": dial tcp 192.168.39.244:8443: connect: connection refused"
	
	
	==> kube-controller-manager [8cdc756e57258c28b832d79ce01adca1bd5873b5d76b82e532a622f4e38a232e] <==
	I0729 11:45:54.757070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.988404ms"
	I0729 11:45:54.757148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.174µs"
	E0729 11:45:59.345356       1 gc_controller.go:153] "Failed to get node" err="node \"ha-691698-m03\" not found" logger="pod-garbage-collector-controller" node="ha-691698-m03"
	E0729 11:45:59.345468       1 gc_controller.go:153] "Failed to get node" err="node \"ha-691698-m03\" not found" logger="pod-garbage-collector-controller" node="ha-691698-m03"
	E0729 11:45:59.345504       1 gc_controller.go:153] "Failed to get node" err="node \"ha-691698-m03\" not found" logger="pod-garbage-collector-controller" node="ha-691698-m03"
	E0729 11:45:59.345536       1 gc_controller.go:153] "Failed to get node" err="node \"ha-691698-m03\" not found" logger="pod-garbage-collector-controller" node="ha-691698-m03"
	E0729 11:45:59.345566       1 gc_controller.go:153] "Failed to get node" err="node \"ha-691698-m03\" not found" logger="pod-garbage-collector-controller" node="ha-691698-m03"
	I0729 11:45:59.358635       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-691698-m03"
	I0729 11:45:59.387728       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-691698-m03"
	I0729 11:45:59.387854       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-691698-m03"
	I0729 11:45:59.416921       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-691698-m03"
	I0729 11:45:59.417291       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-vd69n"
	I0729 11:45:59.443829       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-vd69n"
	I0729 11:45:59.443960       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-691698-m03"
	I0729 11:45:59.473401       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-691698-m03"
	I0729 11:45:59.474519       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-691698-m03"
	I0729 11:45:59.509354       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-691698-m03"
	I0729 11:45:59.509458       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-n929l"
	I0729 11:45:59.539924       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-n929l"
	I0729 11:45:59.540062       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-691698-m03"
	I0729 11:45:59.574369       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-691698-m03"
	I0729 11:45:59.901179       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.058057ms"
	I0729 11:45:59.901480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.382µs"
	I0729 11:46:04.663858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.735676ms"
	I0729 11:46:04.664558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.541µs"
	
	
	==> kube-proxy [2da9ca3c5237b0d7c1da30c6bfddfe0acc1aa1cdf4299778f0e76aae8b09b323] <==
	E0729 11:39:05.961411       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:09.033003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:09.033084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:09.033134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:09.033150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:09.033284       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:09.033323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:15.497092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:15.497152       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:15.497225       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:15.497282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:15.497342       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:15.497373       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:24.715323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:24.715454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:27.786388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:27.786472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:27.786535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:27.786623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:49.289201       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:49.289291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1902": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:49.289363       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:49.289378       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:39:52.362271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:39:52.362324       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [f0c459313956744b95f043aa284816fcdc27f9fe1c44581e4c36e4442f669397] <==
	E0729 11:42:25.961946       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-691698\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 11:42:44.394296       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-691698\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 11:42:44.394491       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0729 11:42:44.437047       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:42:44.437152       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:42:44.437183       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:42:44.439807       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:42:44.440030       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:42:44.440186       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:42:44.441202       1 config.go:192] "Starting service config controller"
	I0729 11:42:44.441291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:42:44.441345       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:42:44.441362       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:42:44.442163       1 config.go:319] "Starting node config controller"
	I0729 11:42:44.442204       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0729 11:42:47.465040       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0729 11:42:47.465846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:42:47.469903       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-691698&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:42:47.465975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:42:47.469966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 11:42:47.466041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 11:42:47.470059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0729 11:42:48.641809       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:42:48.642048       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:42:49.042265       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [24326f59696b17b15ba696a19c689e38c4b1fd710b542620d7e45fb94eb466a0] <==
	W0729 11:40:23.602438       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:40:23.602469       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:40:23.621838       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:40:23.621882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:40:23.759379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:40:23.759424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:40:23.768820       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:40:23.769024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:40:23.920537       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:40:23.920648       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:40:24.119334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:40:24.119403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 11:40:24.181831       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:40:24.181895       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 11:40:24.213357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:40:24.213436       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:40:24.367322       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:40:24.367430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 11:40:24.380951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:40:24.380998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 11:40:24.408611       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:40:24.408737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:40:24.445167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:40:24.445291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:40:25.611135       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [24e35a070016ef6a857927589ffd85ca20169c125193808d42a4b201dc4bbd07] <==
	W0729 11:42:46.373744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:42:46.373773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:42:46.373837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:42:46.373862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:42:46.373903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:42:46.373927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 11:42:46.373968       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:42:46.373993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:42:46.374023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:42:46.374047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:42:46.374101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:42:46.374125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:42:46.374166       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:42:46.374189       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 11:42:46.374221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:42:46.374246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 11:42:46.374288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:42:46.374313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 11:42:46.374349       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:42:46.374370       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 11:42:46.374426       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:42:46.374449       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:42:46.401643       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:42:46.404765       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 11:42:59.741945       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 11:44:07 ha-691698 kubelet[1382]: I0729 11:44:07.544339    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:44:07 ha-691698 kubelet[1382]: E0729 11:44:07.544655    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(694c60e1-9d4e-4fea-96e6-21554bbf1aaa)\"" pod="kube-system/storage-provisioner" podUID="694c60e1-9d4e-4fea-96e6-21554bbf1aaa"
	Jul 29 11:44:20 ha-691698 kubelet[1382]: I0729 11:44:20.544320    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:44:20 ha-691698 kubelet[1382]: E0729 11:44:20.544875    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(694c60e1-9d4e-4fea-96e6-21554bbf1aaa)\"" pod="kube-system/storage-provisioner" podUID="694c60e1-9d4e-4fea-96e6-21554bbf1aaa"
	Jul 29 11:44:35 ha-691698 kubelet[1382]: I0729 11:44:35.544146    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:44:35 ha-691698 kubelet[1382]: E0729 11:44:35.544557    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(694c60e1-9d4e-4fea-96e6-21554bbf1aaa)\"" pod="kube-system/storage-provisioner" podUID="694c60e1-9d4e-4fea-96e6-21554bbf1aaa"
	Jul 29 11:44:47 ha-691698 kubelet[1382]: I0729 11:44:47.545944    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:44:47 ha-691698 kubelet[1382]: E0729 11:44:47.548932    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(694c60e1-9d4e-4fea-96e6-21554bbf1aaa)\"" pod="kube-system/storage-provisioner" podUID="694c60e1-9d4e-4fea-96e6-21554bbf1aaa"
	Jul 29 11:44:58 ha-691698 kubelet[1382]: I0729 11:44:58.544271    1382 scope.go:117] "RemoveContainer" containerID="2d706a5426fe12e7de407062f8498193fe7c821e92e19a56d24bfbdb11308134"
	Jul 29 11:44:58 ha-691698 kubelet[1382]: I0729 11:44:58.673476    1382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-691698" podStartSLOduration=93.673451886 podStartE2EDuration="1m33.673451886s" podCreationTimestamp="2024-07-29 11:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 11:43:35.915174377 +0000 UTC m=+754.529375109" watchObservedRunningTime="2024-07-29 11:44:58.673451886 +0000 UTC m=+837.287652619"
	Jul 29 11:45:01 ha-691698 kubelet[1382]: E0729 11:45:01.568181    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:45:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:45:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:45:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:45:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:46:01 ha-691698 kubelet[1382]: E0729 11:46:01.570825    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:46:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:46:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:46:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:46:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 11:47:01 ha-691698 kubelet[1382]: E0729 11:47:01.566548    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 11:47:01 ha-691698 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 11:47:01 ha-691698 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 11:47:01 ha-691698 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 11:47:01 ha-691698 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:47:40.387411  144594 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19336-113730/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
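Note: the "bufio.Scanner: token too long" error above is Go's scanner hitting its default 64 KiB token limit (bufio.MaxScanTokenSize) while reading lastStart.txt; one of the saved log lines is longer than that. A minimal sketch of the usual workaround (illustrative only, not minikube's code; the file path is a placeholder):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // placeholder path, for illustration only
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); raise it so
		// very long log lines no longer fail with "token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan:", err)
		}
	}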
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-691698 -n ha-691698
helpers_test.go:261: (dbg) Run:  kubectl --context ha-691698 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.70s)
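Note: the repeated "dial tcp 192.168.39.254:8443: connect: no route to host" lines in the kube-proxy and informer logs above are clients retrying the cluster's control-plane VIP while it is unreachable; once the endpoint answers again the informer caches sync ("Caches are synced for ..."). A minimal sketch of that wait-for-endpoint retry pattern, written independently of minikube's code:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForEndpoint dials addr until a TCP connection succeeds or the
	// deadline passes, mirroring how the components above keep retrying
	// while the control-plane VIP is down.
	func waitForEndpoint(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("gave up waiting for %s: %w", addr, err)
			}
			time.Sleep(2 * time.Second)
		}
	}

	func main() {
		if err := waitForEndpoint("192.168.39.254:8443", 30*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("control-plane endpoint is reachable")
	}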

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (324.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-293807
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-293807
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-293807: exit status 82 (2m1.789057095s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-293807-m03"  ...
	* Stopping node "multinode-293807-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
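Note: exit status 82 with GUEST_STOP_TIMEOUT means the VM still reported state "Running" when the stop deadline expired. The shape of that check is a stop request followed by a poll-until-deadline loop; a minimal sketch under assumed names (vmState and stopVM are hypothetical stand-ins, not minikube's API):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmState and stopVM are hypothetical stand-ins for a driver's status
	// and shutdown calls; they are not minikube's actual API.
	func vmState(name string) (string, error) { return "Running", nil }
	func stopVM(name string) error            { return nil }

	// stopWithDeadline asks the VM to stop, then polls its state until it
	// reports "Stopped" or the deadline passes.
	func stopWithDeadline(name string, timeout time.Duration) error {
		if err := stopVM(name); err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			st, err := vmState(name)
			if err != nil {
				return err
			}
			if st == "Stopped" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Short timeout here just to keep the example quick; the test above
		// waited roughly two minutes before giving up.
		if err := stopWithDeadline("multinode-293807-m02", 10*time.Second); err != nil {
			fmt.Println("stop timed out:", err)
		}
	}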
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-293807" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-293807 --wait=true -v=8 --alsologtostderr
E0729 12:04:27.395776  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 12:07:30.441437  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-293807 --wait=true -v=8 --alsologtostderr: (3m20.311503309s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-293807
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-293807 -n multinode-293807
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-293807 logs -n 25: (1.45026465s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m02:/home/docker/cp-test.txt                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1050760835/001/cp-test_multinode-293807-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m02:/home/docker/cp-test.txt                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807:/home/docker/cp-test_multinode-293807-m02_multinode-293807.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n multinode-293807 sudo cat                                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /home/docker/cp-test_multinode-293807-m02_multinode-293807.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m02:/home/docker/cp-test.txt                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03:/home/docker/cp-test_multinode-293807-m02_multinode-293807-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n multinode-293807-m03 sudo cat                                   | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /home/docker/cp-test_multinode-293807-m02_multinode-293807-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp testdata/cp-test.txt                                                | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m03:/home/docker/cp-test.txt                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1050760835/001/cp-test_multinode-293807-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m03:/home/docker/cp-test.txt                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807:/home/docker/cp-test_multinode-293807-m03_multinode-293807.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n multinode-293807 sudo cat                                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /home/docker/cp-test_multinode-293807-m03_multinode-293807.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m03:/home/docker/cp-test.txt                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m02:/home/docker/cp-test_multinode-293807-m03_multinode-293807-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n multinode-293807-m02 sudo cat                                   | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /home/docker/cp-test_multinode-293807-m03_multinode-293807-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-293807 node stop m03                                                          | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	| node    | multinode-293807 node start                                                             | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:02 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-293807                                                                | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC |                     |
	| stop    | -p multinode-293807                                                                     | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC |                     |
	| start   | -p multinode-293807                                                                     | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:04 UTC | 29 Jul 24 12:07 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-293807                                                                | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:04:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:04:17.960649  153921 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:04:17.960974  153921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:04:17.960984  153921 out.go:304] Setting ErrFile to fd 2...
	I0729 12:04:17.960988  153921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:04:17.961171  153921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 12:04:17.961715  153921 out.go:298] Setting JSON to false
	I0729 12:04:17.962625  153921 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6409,"bootTime":1722248249,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:04:17.962686  153921 start.go:139] virtualization: kvm guest
	I0729 12:04:17.964952  153921 out.go:177] * [multinode-293807] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:04:17.966421  153921 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 12:04:17.966428  153921 notify.go:220] Checking for updates...
	I0729 12:04:17.968825  153921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:04:17.970225  153921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:04:17.971453  153921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:04:17.972861  153921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:04:17.974337  153921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:04:17.976412  153921 config.go:182] Loaded profile config "multinode-293807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:04:17.976548  153921 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:04:17.977227  153921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:17.977307  153921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:17.993156  153921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42233
	I0729 12:04:17.993671  153921 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:17.994319  153921 main.go:141] libmachine: Using API Version  1
	I0729 12:04:17.994341  153921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:17.994808  153921 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:17.995069  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:04:18.032159  153921 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:04:18.033503  153921 start.go:297] selected driver: kvm2
	I0729 12:04:18.033519  153921 start.go:901] validating driver "kvm2" against &{Name:multinode-293807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-293807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:04:18.033674  153921 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:04:18.034014  153921 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:04:18.034099  153921 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:04:18.050206  153921 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:04:18.050922  153921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:04:18.050969  153921 cni.go:84] Creating CNI manager for ""
	I0729 12:04:18.050977  153921 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 12:04:18.051044  153921 start.go:340] cluster config:
	{Name:multinode-293807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-293807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:04:18.051227  153921 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:04:18.053922  153921 out.go:177] * Starting "multinode-293807" primary control-plane node in "multinode-293807" cluster
	I0729 12:04:18.055197  153921 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:04:18.055242  153921 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:04:18.055256  153921 cache.go:56] Caching tarball of preloaded images
	I0729 12:04:18.055345  153921 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:04:18.055358  153921 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:04:18.055517  153921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/config.json ...
	I0729 12:04:18.055741  153921 start.go:360] acquireMachinesLock for multinode-293807: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:04:18.055808  153921 start.go:364] duration metric: took 44.837µs to acquireMachinesLock for "multinode-293807"
	I0729 12:04:18.055828  153921 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:04:18.055837  153921 fix.go:54] fixHost starting: 
	I0729 12:04:18.056104  153921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:18.056144  153921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:18.071967  153921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0729 12:04:18.072406  153921 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:18.072924  153921 main.go:141] libmachine: Using API Version  1
	I0729 12:04:18.072945  153921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:18.073318  153921 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:18.073537  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:04:18.073682  153921 main.go:141] libmachine: (multinode-293807) Calling .GetState
	I0729 12:04:18.075285  153921 fix.go:112] recreateIfNeeded on multinode-293807: state=Running err=<nil>
	W0729 12:04:18.075323  153921 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:04:18.078291  153921 out.go:177] * Updating the running kvm2 "multinode-293807" VM ...
	I0729 12:04:18.079843  153921 machine.go:94] provisionDockerMachine start ...
	I0729 12:04:18.079869  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:04:18.080121  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.082597  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.083019  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.083048  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.083236  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:04:18.083416  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.083564  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.083705  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:04:18.083870  153921 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:18.084106  153921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0729 12:04:18.084120  153921 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:04:18.197259  153921 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-293807
	
	I0729 12:04:18.197294  153921 main.go:141] libmachine: (multinode-293807) Calling .GetMachineName
	I0729 12:04:18.197558  153921 buildroot.go:166] provisioning hostname "multinode-293807"
	I0729 12:04:18.197584  153921 main.go:141] libmachine: (multinode-293807) Calling .GetMachineName
	I0729 12:04:18.197776  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.200421  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.200818  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.200846  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.200984  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:04:18.201183  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.201338  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.201455  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:04:18.201601  153921 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:18.201757  153921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0729 12:04:18.201769  153921 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-293807 && echo "multinode-293807" | sudo tee /etc/hostname
	I0729 12:04:18.327942  153921 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-293807
	
	I0729 12:04:18.327987  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.330830  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.331205  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.331243  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.331407  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:04:18.331620  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.331798  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.331928  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:04:18.332082  153921 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:18.332262  153921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0729 12:04:18.332280  153921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-293807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-293807/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-293807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:04:18.449962  153921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:04:18.450000  153921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 12:04:18.450043  153921 buildroot.go:174] setting up certificates
	I0729 12:04:18.450056  153921 provision.go:84] configureAuth start
	I0729 12:04:18.450072  153921 main.go:141] libmachine: (multinode-293807) Calling .GetMachineName
	I0729 12:04:18.450362  153921 main.go:141] libmachine: (multinode-293807) Calling .GetIP
	I0729 12:04:18.452937  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.453320  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.453357  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.453535  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.455599  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.455938  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.455966  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.456100  153921 provision.go:143] copyHostCerts
	I0729 12:04:18.456134  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 12:04:18.456170  153921 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 12:04:18.456179  153921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 12:04:18.456244  153921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 12:04:18.456337  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 12:04:18.456355  153921 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 12:04:18.456359  153921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 12:04:18.456382  153921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 12:04:18.456435  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 12:04:18.456451  153921 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 12:04:18.456457  153921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 12:04:18.456476  153921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 12:04:18.456530  153921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.multinode-293807 san=[127.0.0.1 192.168.39.26 localhost minikube multinode-293807]
	I0729 12:04:18.709672  153921 provision.go:177] copyRemoteCerts
	I0729 12:04:18.709745  153921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:04:18.709771  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.712443  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.712908  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.712943  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.713156  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:04:18.713383  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.713584  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:04:18.713719  153921 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/multinode-293807/id_rsa Username:docker}
	I0729 12:04:18.808642  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 12:04:18.808727  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 12:04:18.834262  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 12:04:18.834348  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 12:04:18.858180  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 12:04:18.858255  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:04:18.883028  153921 provision.go:87] duration metric: took 432.954887ms to configureAuth
	I0729 12:04:18.883060  153921 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:04:18.883283  153921 config.go:182] Loaded profile config "multinode-293807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:04:18.883370  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.886200  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.886585  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.886607  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.886824  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:04:18.887038  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.887205  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.887320  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:04:18.887474  153921 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:18.887662  153921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0729 12:04:18.887683  153921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:05:49.765830  153921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:05:49.765873  153921 machine.go:97] duration metric: took 1m31.68601228s to provisionDockerMachine
	I0729 12:05:49.765887  153921 start.go:293] postStartSetup for "multinode-293807" (driver="kvm2")
	I0729 12:05:49.765899  153921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:05:49.765926  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:05:49.766248  153921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:05:49.766282  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:05:49.769552  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:49.769968  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:49.770010  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:49.770171  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:05:49.770398  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:05:49.770569  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:05:49.770677  153921 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/multinode-293807/id_rsa Username:docker}
	I0729 12:05:49.855885  153921 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:05:49.860097  153921 command_runner.go:130] > NAME=Buildroot
	I0729 12:05:49.860120  153921 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 12:05:49.860126  153921 command_runner.go:130] > ID=buildroot
	I0729 12:05:49.860133  153921 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 12:05:49.860140  153921 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 12:05:49.860191  153921 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:05:49.860209  153921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 12:05:49.860283  153921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 12:05:49.860352  153921 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 12:05:49.860362  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /etc/ssl/certs/1209632.pem
	I0729 12:05:49.860454  153921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:05:49.870135  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:05:49.894511  153921 start.go:296] duration metric: took 128.605695ms for postStartSetup
	I0729 12:05:49.894560  153921 fix.go:56] duration metric: took 1m31.838725321s for fixHost
	I0729 12:05:49.894582  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:05:49.897333  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:49.897761  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:49.897798  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:49.897934  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:05:49.898169  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:05:49.898310  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:05:49.898441  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:05:49.898597  153921 main.go:141] libmachine: Using SSH client type: native
	I0729 12:05:49.898833  153921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0729 12:05:49.898848  153921 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:05:50.013753  153921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722254749.986012519
	
	I0729 12:05:50.013782  153921 fix.go:216] guest clock: 1722254749.986012519
	I0729 12:05:50.013801  153921 fix.go:229] Guest: 2024-07-29 12:05:49.986012519 +0000 UTC Remote: 2024-07-29 12:05:49.894564673 +0000 UTC m=+91.974195898 (delta=91.447846ms)
	I0729 12:05:50.013832  153921 fix.go:200] guest clock delta is within tolerance: 91.447846ms
	I0729 12:05:50.013841  153921 start.go:83] releasing machines lock for "multinode-293807", held for 1m31.958021336s
	I0729 12:05:50.013868  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:05:50.014199  153921 main.go:141] libmachine: (multinode-293807) Calling .GetIP
	I0729 12:05:50.016936  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.017307  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:50.017424  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.017518  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:05:50.018043  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:05:50.018216  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:05:50.018287  153921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:05:50.018350  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:05:50.018454  153921 ssh_runner.go:195] Run: cat /version.json
	I0729 12:05:50.018481  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:05:50.021266  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.021294  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.021696  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:50.021724  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.021751  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:50.021763  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.021892  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:05:50.022001  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:05:50.022090  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:05:50.022142  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:05:50.022200  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:05:50.022241  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:05:50.022300  153921 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/multinode-293807/id_rsa Username:docker}
	I0729 12:05:50.022348  153921 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/multinode-293807/id_rsa Username:docker}
	I0729 12:05:50.101398  153921 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 12:05:50.101693  153921 ssh_runner.go:195] Run: systemctl --version
	I0729 12:05:50.123455  153921 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 12:05:50.123521  153921 command_runner.go:130] > systemd 252 (252)
	I0729 12:05:50.123549  153921 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 12:05:50.123623  153921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:05:50.278878  153921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 12:05:50.284438  153921 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 12:05:50.284528  153921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:05:50.284592  153921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:05:50.294044  153921 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:05:50.294072  153921 start.go:495] detecting cgroup driver to use...
	I0729 12:05:50.294151  153921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:05:50.310921  153921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:05:50.325490  153921 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:05:50.325573  153921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:05:50.339841  153921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:05:50.353862  153921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:05:50.496896  153921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:05:50.645673  153921 docker.go:233] disabling docker service ...
	I0729 12:05:50.645743  153921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:05:50.662403  153921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:05:50.675964  153921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:05:50.820424  153921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:05:50.966315  153921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:05:50.980932  153921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:05:50.999447  153921 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 12:05:50.999709  153921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:05:50.999778  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.010609  153921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:05:51.010680  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.021508  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.032512  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.043455  153921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:05:51.054872  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.065635  153921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.077007  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.087502  153921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:05:51.097004  153921 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 12:05:51.097103  153921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:05:51.107066  153921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:05:51.244296  153921 ssh_runner.go:195] Run: sudo systemctl restart crio
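The sed/sysctl sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the pinned pause image and the cgroupfs cgroup manager, then reloads systemd and restarts the service. A rough Go equivalent of the two central substitutions, working on an in-memory copy of the file (behaviour inferred only from the commands logged above, not from minikube's source):

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf pins the pause image and the cgroup manager in a CRI-O
    // drop-in config, mirroring the two sed substitutions from the log.
    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, fmt.Sprintf(`pause_image = %q`, pauseImage))
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, fmt.Sprintf(`cgroup_manager = %q`, cgroupManager))
        return conf
    }

    func main() {
        in := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
    }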
	I0729 12:05:51.781015  153921 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:05:51.781107  153921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:05:51.785682  153921 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 12:05:51.785708  153921 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 12:05:51.785715  153921 command_runner.go:130] > Device: 0,22	Inode: 1317        Links: 1
	I0729 12:05:51.785721  153921 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 12:05:51.785726  153921 command_runner.go:130] > Access: 2024-07-29 12:05:51.648880032 +0000
	I0729 12:05:51.785732  153921 command_runner.go:130] > Modify: 2024-07-29 12:05:51.648880032 +0000
	I0729 12:05:51.785738  153921 command_runner.go:130] > Change: 2024-07-29 12:05:51.648880032 +0000
	I0729 12:05:51.785743  153921 command_runner.go:130] >  Birth: -
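After restarting CRI-O, the log shows minikube waiting up to 60s for /var/run/crio/crio.sock to exist and then stat-ing it. A minimal polling sketch with the same timeout (waitForSocket is a hypothetical helper, not a minikube function):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }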
	I0729 12:05:51.785768  153921 start.go:563] Will wait 60s for crictl version
	I0729 12:05:51.785836  153921 ssh_runner.go:195] Run: which crictl
	I0729 12:05:51.789472  153921 command_runner.go:130] > /usr/bin/crictl
	I0729 12:05:51.789585  153921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:05:51.823263  153921 command_runner.go:130] > Version:  0.1.0
	I0729 12:05:51.823291  153921 command_runner.go:130] > RuntimeName:  cri-o
	I0729 12:05:51.823296  153921 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 12:05:51.823302  153921 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 12:05:51.824201  153921 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:05:51.824274  153921 ssh_runner.go:195] Run: crio --version
	I0729 12:05:51.851865  153921 command_runner.go:130] > crio version 1.29.1
	I0729 12:05:51.851888  153921 command_runner.go:130] > Version:        1.29.1
	I0729 12:05:51.851894  153921 command_runner.go:130] > GitCommit:      unknown
	I0729 12:05:51.851906  153921 command_runner.go:130] > GitCommitDate:  unknown
	I0729 12:05:51.851913  153921 command_runner.go:130] > GitTreeState:   clean
	I0729 12:05:51.851922  153921 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 12:05:51.851929  153921 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 12:05:51.851935  153921 command_runner.go:130] > Compiler:       gc
	I0729 12:05:51.851940  153921 command_runner.go:130] > Platform:       linux/amd64
	I0729 12:05:51.851945  153921 command_runner.go:130] > Linkmode:       dynamic
	I0729 12:05:51.851949  153921 command_runner.go:130] > BuildTags:      
	I0729 12:05:51.851957  153921 command_runner.go:130] >   containers_image_ostree_stub
	I0729 12:05:51.851961  153921 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 12:05:51.851965  153921 command_runner.go:130] >   btrfs_noversion
	I0729 12:05:51.851970  153921 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 12:05:51.851976  153921 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 12:05:51.851981  153921 command_runner.go:130] >   seccomp
	I0729 12:05:51.851989  153921 command_runner.go:130] > LDFlags:          unknown
	I0729 12:05:51.851998  153921 command_runner.go:130] > SeccompEnabled:   true
	I0729 12:05:51.852008  153921 command_runner.go:130] > AppArmorEnabled:  false
	I0729 12:05:51.853277  153921 ssh_runner.go:195] Run: crio --version
	I0729 12:05:51.881454  153921 command_runner.go:130] > crio version 1.29.1
	I0729 12:05:51.881482  153921 command_runner.go:130] > Version:        1.29.1
	I0729 12:05:51.881491  153921 command_runner.go:130] > GitCommit:      unknown
	I0729 12:05:51.881496  153921 command_runner.go:130] > GitCommitDate:  unknown
	I0729 12:05:51.881501  153921 command_runner.go:130] > GitTreeState:   clean
	I0729 12:05:51.881507  153921 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 12:05:51.881520  153921 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 12:05:51.881524  153921 command_runner.go:130] > Compiler:       gc
	I0729 12:05:51.881528  153921 command_runner.go:130] > Platform:       linux/amd64
	I0729 12:05:51.881532  153921 command_runner.go:130] > Linkmode:       dynamic
	I0729 12:05:51.881537  153921 command_runner.go:130] > BuildTags:      
	I0729 12:05:51.881542  153921 command_runner.go:130] >   containers_image_ostree_stub
	I0729 12:05:51.881547  153921 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 12:05:51.881554  153921 command_runner.go:130] >   btrfs_noversion
	I0729 12:05:51.881561  153921 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 12:05:51.881569  153921 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 12:05:51.881579  153921 command_runner.go:130] >   seccomp
	I0729 12:05:51.881586  153921 command_runner.go:130] > LDFlags:          unknown
	I0729 12:05:51.881592  153921 command_runner.go:130] > SeccompEnabled:   true
	I0729 12:05:51.881598  153921 command_runner.go:130] > AppArmorEnabled:  false
	I0729 12:05:51.883736  153921 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:05:51.885218  153921 main.go:141] libmachine: (multinode-293807) Calling .GetIP
	I0729 12:05:51.887889  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:51.888257  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:51.888290  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:51.888487  153921 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:05:51.892592  153921 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 12:05:51.892836  153921 kubeadm.go:883] updating cluster {Name:multinode-293807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-293807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:05:51.893024  153921 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:05:51.893076  153921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:05:51.932948  153921 command_runner.go:130] > {
	I0729 12:05:51.932997  153921 command_runner.go:130] >   "images": [
	I0729 12:05:51.933003  153921 command_runner.go:130] >     {
	I0729 12:05:51.933017  153921 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 12:05:51.933024  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933034  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 12:05:51.933040  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933047  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933060  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 12:05:51.933075  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 12:05:51.933089  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933096  153921 command_runner.go:130] >       "size": "87165492",
	I0729 12:05:51.933106  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.933113  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933125  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933133  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933139  153921 command_runner.go:130] >     },
	I0729 12:05:51.933146  153921 command_runner.go:130] >     {
	I0729 12:05:51.933155  153921 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 12:05:51.933161  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933171  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 12:05:51.933178  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933185  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933196  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 12:05:51.933210  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 12:05:51.933218  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933226  153921 command_runner.go:130] >       "size": "87174707",
	I0729 12:05:51.933235  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.933244  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933252  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933257  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933263  153921 command_runner.go:130] >     },
	I0729 12:05:51.933271  153921 command_runner.go:130] >     {
	I0729 12:05:51.933280  153921 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 12:05:51.933288  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933295  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 12:05:51.933302  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933308  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933321  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 12:05:51.933333  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 12:05:51.933341  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933347  153921 command_runner.go:130] >       "size": "1363676",
	I0729 12:05:51.933355  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.933361  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933366  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933374  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933384  153921 command_runner.go:130] >     },
	I0729 12:05:51.933391  153921 command_runner.go:130] >     {
	I0729 12:05:51.933400  153921 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 12:05:51.933409  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933417  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 12:05:51.933425  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933431  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933445  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 12:05:51.933463  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 12:05:51.933470  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933476  153921 command_runner.go:130] >       "size": "31470524",
	I0729 12:05:51.933484  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.933491  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933500  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933506  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933513  153921 command_runner.go:130] >     },
	I0729 12:05:51.933519  153921 command_runner.go:130] >     {
	I0729 12:05:51.933530  153921 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 12:05:51.933539  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933548  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 12:05:51.933555  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933562  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933573  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 12:05:51.933588  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 12:05:51.933597  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933603  153921 command_runner.go:130] >       "size": "61245718",
	I0729 12:05:51.933612  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.933619  153921 command_runner.go:130] >       "username": "nonroot",
	I0729 12:05:51.933628  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933633  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933642  153921 command_runner.go:130] >     },
	I0729 12:05:51.933647  153921 command_runner.go:130] >     {
	I0729 12:05:51.933661  153921 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 12:05:51.933667  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933686  153921 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 12:05:51.933694  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933702  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933714  153921 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 12:05:51.933727  153921 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 12:05:51.933736  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933746  153921 command_runner.go:130] >       "size": "150779692",
	I0729 12:05:51.933753  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.933761  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.933766  153921 command_runner.go:130] >       },
	I0729 12:05:51.933772  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933778  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933783  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933788  153921 command_runner.go:130] >     },
	I0729 12:05:51.933801  153921 command_runner.go:130] >     {
	I0729 12:05:51.933811  153921 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 12:05:51.933820  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933827  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 12:05:51.933836  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933842  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933853  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 12:05:51.933866  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 12:05:51.933880  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933886  153921 command_runner.go:130] >       "size": "117609954",
	I0729 12:05:51.933896  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.933902  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.933910  153921 command_runner.go:130] >       },
	I0729 12:05:51.933916  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933927  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933935  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933940  153921 command_runner.go:130] >     },
	I0729 12:05:51.933947  153921 command_runner.go:130] >     {
	I0729 12:05:51.933955  153921 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 12:05:51.933964  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933972  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 12:05:51.933980  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933986  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.934010  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 12:05:51.934025  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 12:05:51.934030  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934036  153921 command_runner.go:130] >       "size": "112198984",
	I0729 12:05:51.934045  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.934050  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.934059  153921 command_runner.go:130] >       },
	I0729 12:05:51.934065  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.934071  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.934076  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.934081  153921 command_runner.go:130] >     },
	I0729 12:05:51.934085  153921 command_runner.go:130] >     {
	I0729 12:05:51.934094  153921 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 12:05:51.934099  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.934107  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 12:05:51.934112  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934117  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.934128  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 12:05:51.934139  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 12:05:51.934150  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934155  153921 command_runner.go:130] >       "size": "85953945",
	I0729 12:05:51.934163  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.934169  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.934175  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.934182  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.934191  153921 command_runner.go:130] >     },
	I0729 12:05:51.934196  153921 command_runner.go:130] >     {
	I0729 12:05:51.934207  153921 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 12:05:51.934215  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.934226  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 12:05:51.934235  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934241  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.934254  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 12:05:51.934266  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 12:05:51.934274  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934281  153921 command_runner.go:130] >       "size": "63051080",
	I0729 12:05:51.934289  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.934299  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.934306  153921 command_runner.go:130] >       },
	I0729 12:05:51.934312  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.934321  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.934327  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.934335  153921 command_runner.go:130] >     },
	I0729 12:05:51.934339  153921 command_runner.go:130] >     {
	I0729 12:05:51.934351  153921 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 12:05:51.934357  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.934365  153921 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 12:05:51.934371  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934377  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.934389  153921 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 12:05:51.934402  153921 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 12:05:51.934411  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934418  153921 command_runner.go:130] >       "size": "750414",
	I0729 12:05:51.934426  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.934436  153921 command_runner.go:130] >         "value": "65535"
	I0729 12:05:51.934441  153921 command_runner.go:130] >       },
	I0729 12:05:51.934450  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.934456  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.934465  153921 command_runner.go:130] >       "pinned": true
	I0729 12:05:51.934470  153921 command_runner.go:130] >     }
	I0729 12:05:51.934477  153921 command_runner.go:130] >   ]
	I0729 12:05:51.934482  153921 command_runner.go:130] > }
	I0729 12:05:51.934714  153921 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:05:51.934733  153921 crio.go:433] Images already preloaded, skipping extraction
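The JSON dump above is the output of `sudo crictl images --output json`, which minikube inspects to decide that all Kubernetes v1.30.3 images are already preloaded and extraction can be skipped. A small Go sketch of parsing that shape and checking for a required tag (the struct layout and the example tag below are assumptions based only on the JSON shown, not minikube's own types):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // imageList matches the subset of `crictl images --output json` used here.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether any listed image carries the wanted tag.
    func hasImage(raw []byte, tag string) (bool, error) {
        var list imageList
        if err := json.Unmarshal(raw, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                if t == tag {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"]}]}`)
        ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.30.3")
        fmt.Println(ok, err)
    }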
	I0729 12:05:51.934788  153921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:05:51.964809  153921 command_runner.go:130] > {
	I0729 12:05:51.964835  153921 command_runner.go:130] >   "images": [
	I0729 12:05:51.964839  153921 command_runner.go:130] >     {
	I0729 12:05:51.964849  153921 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 12:05:51.964854  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.964860  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 12:05:51.964864  153921 command_runner.go:130] >       ],
	I0729 12:05:51.964868  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.964877  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 12:05:51.964883  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 12:05:51.964886  153921 command_runner.go:130] >       ],
	I0729 12:05:51.964891  153921 command_runner.go:130] >       "size": "87165492",
	I0729 12:05:51.964895  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.964900  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.964908  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.964913  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.964916  153921 command_runner.go:130] >     },
	I0729 12:05:51.964919  153921 command_runner.go:130] >     {
	I0729 12:05:51.964926  153921 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 12:05:51.964935  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.964939  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 12:05:51.964943  153921 command_runner.go:130] >       ],
	I0729 12:05:51.964947  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.964955  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 12:05:51.964971  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 12:05:51.964977  153921 command_runner.go:130] >       ],
	I0729 12:05:51.964982  153921 command_runner.go:130] >       "size": "87174707",
	I0729 12:05:51.964992  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.964998  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965002  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965006  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965009  153921 command_runner.go:130] >     },
	I0729 12:05:51.965013  153921 command_runner.go:130] >     {
	I0729 12:05:51.965019  153921 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 12:05:51.965025  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965031  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 12:05:51.965033  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965038  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965046  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 12:05:51.965053  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 12:05:51.965058  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965062  153921 command_runner.go:130] >       "size": "1363676",
	I0729 12:05:51.965066  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.965070  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965075  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965081  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965087  153921 command_runner.go:130] >     },
	I0729 12:05:51.965091  153921 command_runner.go:130] >     {
	I0729 12:05:51.965097  153921 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 12:05:51.965103  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965108  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 12:05:51.965115  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965118  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965125  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 12:05:51.965138  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 12:05:51.965144  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965148  153921 command_runner.go:130] >       "size": "31470524",
	I0729 12:05:51.965151  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.965155  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965160  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965166  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965169  153921 command_runner.go:130] >     },
	I0729 12:05:51.965175  153921 command_runner.go:130] >     {
	I0729 12:05:51.965181  153921 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 12:05:51.965187  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965192  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 12:05:51.965198  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965202  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965209  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 12:05:51.965218  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 12:05:51.965222  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965228  153921 command_runner.go:130] >       "size": "61245718",
	I0729 12:05:51.965232  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.965236  153921 command_runner.go:130] >       "username": "nonroot",
	I0729 12:05:51.965240  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965243  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965247  153921 command_runner.go:130] >     },
	I0729 12:05:51.965250  153921 command_runner.go:130] >     {
	I0729 12:05:51.965256  153921 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 12:05:51.965262  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965266  153921 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 12:05:51.965272  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965275  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965283  153921 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 12:05:51.965292  153921 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 12:05:51.965297  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965303  153921 command_runner.go:130] >       "size": "150779692",
	I0729 12:05:51.965308  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.965312  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.965318  153921 command_runner.go:130] >       },
	I0729 12:05:51.965322  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965327  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965332  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965337  153921 command_runner.go:130] >     },
	I0729 12:05:51.965341  153921 command_runner.go:130] >     {
	I0729 12:05:51.965347  153921 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 12:05:51.965353  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965358  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 12:05:51.965363  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965368  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965378  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 12:05:51.965387  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 12:05:51.965391  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965395  153921 command_runner.go:130] >       "size": "117609954",
	I0729 12:05:51.965401  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.965405  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.965409  153921 command_runner.go:130] >       },
	I0729 12:05:51.965413  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965417  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965423  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965426  153921 command_runner.go:130] >     },
	I0729 12:05:51.965432  153921 command_runner.go:130] >     {
	I0729 12:05:51.965437  153921 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 12:05:51.965441  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965448  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 12:05:51.965451  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965456  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965476  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 12:05:51.965486  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 12:05:51.965489  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965493  153921 command_runner.go:130] >       "size": "112198984",
	I0729 12:05:51.965502  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.965506  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.965510  153921 command_runner.go:130] >       },
	I0729 12:05:51.965514  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965518  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965522  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965525  153921 command_runner.go:130] >     },
	I0729 12:05:51.965528  153921 command_runner.go:130] >     {
	I0729 12:05:51.965534  153921 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 12:05:51.965539  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965544  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 12:05:51.965549  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965553  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965560  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 12:05:51.965569  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 12:05:51.965573  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965577  153921 command_runner.go:130] >       "size": "85953945",
	I0729 12:05:51.965581  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.965587  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965592  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965596  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965601  153921 command_runner.go:130] >     },
	I0729 12:05:51.965604  153921 command_runner.go:130] >     {
	I0729 12:05:51.965610  153921 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 12:05:51.965616  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965621  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 12:05:51.965626  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965630  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965637  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 12:05:51.965646  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 12:05:51.965649  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965653  153921 command_runner.go:130] >       "size": "63051080",
	I0729 12:05:51.965657  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.965661  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.965664  153921 command_runner.go:130] >       },
	I0729 12:05:51.965671  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965675  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965680  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965684  153921 command_runner.go:130] >     },
	I0729 12:05:51.965689  153921 command_runner.go:130] >     {
	I0729 12:05:51.965695  153921 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 12:05:51.965701  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965705  153921 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 12:05:51.965710  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965714  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965722  153921 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 12:05:51.965731  153921 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 12:05:51.965735  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965740  153921 command_runner.go:130] >       "size": "750414",
	I0729 12:05:51.965744  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.965747  153921 command_runner.go:130] >         "value": "65535"
	I0729 12:05:51.965753  153921 command_runner.go:130] >       },
	I0729 12:05:51.965757  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965764  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965768  153921 command_runner.go:130] >       "pinned": true
	I0729 12:05:51.965771  153921 command_runner.go:130] >     }
	I0729 12:05:51.965775  153921 command_runner.go:130] >   ]
	I0729 12:05:51.965778  153921 command_runner.go:130] > }
	I0729 12:05:51.966334  153921 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:05:51.966354  153921 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:05:51.966364  153921 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.30.3 crio true true} ...
	I0729 12:05:51.966460  153921 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-293807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-293807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:05:51.966529  153921 ssh_runner.go:195] Run: crio config
	I0729 12:05:52.008730  153921 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 12:05:52.008766  153921 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 12:05:52.008775  153921 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 12:05:52.008778  153921 command_runner.go:130] > #
	I0729 12:05:52.008797  153921 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 12:05:52.008806  153921 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 12:05:52.008815  153921 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 12:05:52.008828  153921 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 12:05:52.008834  153921 command_runner.go:130] > # reload'.
	I0729 12:05:52.008844  153921 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 12:05:52.008855  153921 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 12:05:52.008861  153921 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 12:05:52.008867  153921 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 12:05:52.008871  153921 command_runner.go:130] > [crio]
	I0729 12:05:52.008877  153921 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 12:05:52.008882  153921 command_runner.go:130] > # containers images, in this directory.
	I0729 12:05:52.008890  153921 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 12:05:52.008902  153921 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 12:05:52.008914  153921 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 12:05:52.008927  153921 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 12:05:52.008937  153921 command_runner.go:130] > # imagestore = ""
	I0729 12:05:52.008947  153921 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 12:05:52.008956  153921 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 12:05:52.008969  153921 command_runner.go:130] > storage_driver = "overlay"
	I0729 12:05:52.008975  153921 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 12:05:52.008982  153921 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 12:05:52.008985  153921 command_runner.go:130] > storage_option = [
	I0729 12:05:52.008990  153921 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 12:05:52.008995  153921 command_runner.go:130] > ]
	I0729 12:05:52.009001  153921 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 12:05:52.009007  153921 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 12:05:52.009015  153921 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 12:05:52.009021  153921 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 12:05:52.009039  153921 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 12:05:52.009046  153921 command_runner.go:130] > # always happen on a node reboot
	I0729 12:05:52.009056  153921 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 12:05:52.009076  153921 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 12:05:52.009088  153921 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 12:05:52.009103  153921 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 12:05:52.009114  153921 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 12:05:52.009129  153921 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 12:05:52.009142  153921 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 12:05:52.009148  153921 command_runner.go:130] > # internal_wipe = true
	I0729 12:05:52.009155  153921 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 12:05:52.009160  153921 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 12:05:52.009164  153921 command_runner.go:130] > # internal_repair = false
	I0729 12:05:52.009170  153921 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 12:05:52.009178  153921 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 12:05:52.009186  153921 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 12:05:52.009191  153921 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 12:05:52.009199  153921 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 12:05:52.009202  153921 command_runner.go:130] > [crio.api]
	I0729 12:05:52.009210  153921 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 12:05:52.009214  153921 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 12:05:52.009223  153921 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 12:05:52.009229  153921 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 12:05:52.009239  153921 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 12:05:52.009245  153921 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 12:05:52.009251  153921 command_runner.go:130] > # stream_port = "0"
	I0729 12:05:52.009259  153921 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 12:05:52.009267  153921 command_runner.go:130] > # stream_enable_tls = false
	I0729 12:05:52.009276  153921 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 12:05:52.009282  153921 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 12:05:52.009296  153921 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 12:05:52.009309  153921 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 12:05:52.009318  153921 command_runner.go:130] > # minutes.
	I0729 12:05:52.009325  153921 command_runner.go:130] > # stream_tls_cert = ""
	I0729 12:05:52.009339  153921 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 12:05:52.009352  153921 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 12:05:52.009360  153921 command_runner.go:130] > # stream_tls_key = ""
	I0729 12:05:52.009367  153921 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 12:05:52.009379  153921 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 12:05:52.009399  153921 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 12:05:52.009409  153921 command_runner.go:130] > # stream_tls_ca = ""
	I0729 12:05:52.009420  153921 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 12:05:52.009431  153921 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 12:05:52.009445  153921 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 12:05:52.009456  153921 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 12:05:52.009469  153921 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 12:05:52.009480  153921 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 12:05:52.009490  153921 command_runner.go:130] > [crio.runtime]
	I0729 12:05:52.009500  153921 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 12:05:52.009512  153921 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 12:05:52.009521  153921 command_runner.go:130] > # "nofile=1024:2048"
	I0729 12:05:52.009530  153921 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 12:05:52.009540  153921 command_runner.go:130] > # default_ulimits = [
	I0729 12:05:52.009545  153921 command_runner.go:130] > # ]
	I0729 12:05:52.009558  153921 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 12:05:52.009570  153921 command_runner.go:130] > # no_pivot = false
	I0729 12:05:52.009582  153921 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 12:05:52.009593  153921 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 12:05:52.009604  153921 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 12:05:52.009617  153921 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 12:05:52.009627  153921 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 12:05:52.009647  153921 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 12:05:52.009657  153921 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 12:05:52.009664  153921 command_runner.go:130] > # Cgroup setting for conmon
	I0729 12:05:52.009677  153921 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 12:05:52.009687  153921 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 12:05:52.009697  153921 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 12:05:52.009708  153921 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 12:05:52.009720  153921 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 12:05:52.009728  153921 command_runner.go:130] > conmon_env = [
	I0729 12:05:52.009738  153921 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 12:05:52.009747  153921 command_runner.go:130] > ]
	I0729 12:05:52.009755  153921 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 12:05:52.009767  153921 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 12:05:52.009785  153921 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 12:05:52.009795  153921 command_runner.go:130] > # default_env = [
	I0729 12:05:52.009801  153921 command_runner.go:130] > # ]
	I0729 12:05:52.009813  153921 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 12:05:52.009828  153921 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0729 12:05:52.009837  153921 command_runner.go:130] > # selinux = false
	I0729 12:05:52.009847  153921 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 12:05:52.009857  153921 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 12:05:52.009864  153921 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 12:05:52.009868  153921 command_runner.go:130] > # seccomp_profile = ""
	I0729 12:05:52.009876  153921 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 12:05:52.009881  153921 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 12:05:52.009889  153921 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 12:05:52.009894  153921 command_runner.go:130] > # which might increase security.
	I0729 12:05:52.009898  153921 command_runner.go:130] > # This option is currently deprecated,
	I0729 12:05:52.009909  153921 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 12:05:52.009919  153921 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 12:05:52.009930  153921 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 12:05:52.009944  153921 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 12:05:52.009956  153921 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 12:05:52.009969  153921 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 12:05:52.009979  153921 command_runner.go:130] > # This option supports live configuration reload.
	I0729 12:05:52.009990  153921 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 12:05:52.010002  153921 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 12:05:52.010013  153921 command_runner.go:130] > # the cgroup blockio controller.
	I0729 12:05:52.010023  153921 command_runner.go:130] > # blockio_config_file = ""
	I0729 12:05:52.010033  153921 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 12:05:52.010042  153921 command_runner.go:130] > # blockio parameters.
	I0729 12:05:52.010048  153921 command_runner.go:130] > # blockio_reload = false
	I0729 12:05:52.010058  153921 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 12:05:52.010062  153921 command_runner.go:130] > # irqbalance daemon.
	I0729 12:05:52.010069  153921 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 12:05:52.010077  153921 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 12:05:52.010087  153921 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 12:05:52.010100  153921 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 12:05:52.010112  153921 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 12:05:52.010124  153921 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 12:05:52.010135  153921 command_runner.go:130] > # This option supports live configuration reload.
	I0729 12:05:52.010144  153921 command_runner.go:130] > # rdt_config_file = ""
	I0729 12:05:52.010156  153921 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 12:05:52.010166  153921 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 12:05:52.010196  153921 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 12:05:52.010207  153921 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 12:05:52.010216  153921 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 12:05:52.010230  153921 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 12:05:52.010239  153921 command_runner.go:130] > # will be added.
	I0729 12:05:52.010246  153921 command_runner.go:130] > # default_capabilities = [
	I0729 12:05:52.010254  153921 command_runner.go:130] > # 	"CHOWN",
	I0729 12:05:52.010261  153921 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 12:05:52.010270  153921 command_runner.go:130] > # 	"FSETID",
	I0729 12:05:52.010278  153921 command_runner.go:130] > # 	"FOWNER",
	I0729 12:05:52.010287  153921 command_runner.go:130] > # 	"SETGID",
	I0729 12:05:52.010300  153921 command_runner.go:130] > # 	"SETUID",
	I0729 12:05:52.010306  153921 command_runner.go:130] > # 	"SETPCAP",
	I0729 12:05:52.010315  153921 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 12:05:52.010322  153921 command_runner.go:130] > # 	"KILL",
	I0729 12:05:52.010329  153921 command_runner.go:130] > # ]
	I0729 12:05:52.010340  153921 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 12:05:52.010355  153921 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 12:05:52.010368  153921 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 12:05:52.010381  153921 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 12:05:52.010393  153921 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 12:05:52.010403  153921 command_runner.go:130] > default_sysctls = [
	I0729 12:05:52.010410  153921 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 12:05:52.010419  153921 command_runner.go:130] > ]
	I0729 12:05:52.010427  153921 command_runner.go:130] > # List of devices on the host that a
	I0729 12:05:52.010439  153921 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 12:05:52.010448  153921 command_runner.go:130] > # allowed_devices = [
	I0729 12:05:52.010456  153921 command_runner.go:130] > # 	"/dev/fuse",
	I0729 12:05:52.010465  153921 command_runner.go:130] > # ]
	I0729 12:05:52.010473  153921 command_runner.go:130] > # List of additional devices, specified as
	I0729 12:05:52.010487  153921 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 12:05:52.010498  153921 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 12:05:52.010507  153921 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 12:05:52.010518  153921 command_runner.go:130] > # additional_devices = [
	I0729 12:05:52.010524  153921 command_runner.go:130] > # ]
	I0729 12:05:52.010535  153921 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 12:05:52.010543  153921 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 12:05:52.010550  153921 command_runner.go:130] > # 	"/etc/cdi",
	I0729 12:05:52.010559  153921 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 12:05:52.010568  153921 command_runner.go:130] > # ]
	I0729 12:05:52.010577  153921 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 12:05:52.010589  153921 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 12:05:52.010599  153921 command_runner.go:130] > # Defaults to false.
	I0729 12:05:52.010608  153921 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 12:05:52.010621  153921 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 12:05:52.010634  153921 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 12:05:52.010643  153921 command_runner.go:130] > # hooks_dir = [
	I0729 12:05:52.010651  153921 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 12:05:52.010659  153921 command_runner.go:130] > # ]
	I0729 12:05:52.010669  153921 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 12:05:52.010682  153921 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 12:05:52.010693  153921 command_runner.go:130] > # its default mounts from the following two files:
	I0729 12:05:52.010701  153921 command_runner.go:130] > #
	I0729 12:05:52.010710  153921 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 12:05:52.010725  153921 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 12:05:52.010736  153921 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 12:05:52.010743  153921 command_runner.go:130] > #
	I0729 12:05:52.010753  153921 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 12:05:52.010767  153921 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 12:05:52.010780  153921 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 12:05:52.010794  153921 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 12:05:52.010799  153921 command_runner.go:130] > #
	I0729 12:05:52.010810  153921 command_runner.go:130] > # default_mounts_file = ""
	I0729 12:05:52.010820  153921 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 12:05:52.010833  153921 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 12:05:52.010843  153921 command_runner.go:130] > pids_limit = 1024
	I0729 12:05:52.010853  153921 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 12:05:52.010865  153921 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 12:05:52.010877  153921 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 12:05:52.010887  153921 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 12:05:52.010891  153921 command_runner.go:130] > # log_size_max = -1
	I0729 12:05:52.010897  153921 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 12:05:52.010904  153921 command_runner.go:130] > # log_to_journald = false
	I0729 12:05:52.010909  153921 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 12:05:52.010916  153921 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 12:05:52.010921  153921 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 12:05:52.010932  153921 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 12:05:52.010941  153921 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 12:05:52.010951  153921 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 12:05:52.010961  153921 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 12:05:52.010971  153921 command_runner.go:130] > # read_only = false
	I0729 12:05:52.010980  153921 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 12:05:52.010994  153921 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 12:05:52.011004  153921 command_runner.go:130] > # live configuration reload.
	I0729 12:05:52.011011  153921 command_runner.go:130] > # log_level = "info"
	I0729 12:05:52.011022  153921 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 12:05:52.011033  153921 command_runner.go:130] > # This option supports live configuration reload.
	I0729 12:05:52.011040  153921 command_runner.go:130] > # log_filter = ""
	I0729 12:05:52.011053  153921 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 12:05:52.011065  153921 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 12:05:52.011072  153921 command_runner.go:130] > # separated by comma.
	I0729 12:05:52.011088  153921 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 12:05:52.011098  153921 command_runner.go:130] > # uid_mappings = ""
	I0729 12:05:52.011111  153921 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 12:05:52.011125  153921 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 12:05:52.011134  153921 command_runner.go:130] > # separated by comma.
	I0729 12:05:52.011146  153921 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 12:05:52.011156  153921 command_runner.go:130] > # gid_mappings = ""
	I0729 12:05:52.011166  153921 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 12:05:52.011180  153921 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 12:05:52.011194  153921 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 12:05:52.011205  153921 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 12:05:52.011224  153921 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 12:05:52.011238  153921 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 12:05:52.011251  153921 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 12:05:52.011262  153921 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 12:05:52.011277  153921 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 12:05:52.011286  153921 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 12:05:52.011297  153921 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 12:05:52.011310  153921 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 12:05:52.011322  153921 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 12:05:52.011332  153921 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 12:05:52.011341  153921 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 12:05:52.011354  153921 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 12:05:52.011365  153921 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 12:05:52.011376  153921 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 12:05:52.011384  153921 command_runner.go:130] > drop_infra_ctr = false
	I0729 12:05:52.011395  153921 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 12:05:52.011407  153921 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 12:05:52.011420  153921 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 12:05:52.011429  153921 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 12:05:52.011441  153921 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 12:05:52.011454  153921 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 12:05:52.011464  153921 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 12:05:52.011475  153921 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 12:05:52.011482  153921 command_runner.go:130] > # shared_cpuset = ""
	I0729 12:05:52.011493  153921 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 12:05:52.011503  153921 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 12:05:52.011508  153921 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 12:05:52.011518  153921 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 12:05:52.011522  153921 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 12:05:52.011530  153921 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 12:05:52.011538  153921 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 12:05:52.011543  153921 command_runner.go:130] > # enable_criu_support = false
	I0729 12:05:52.011550  153921 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 12:05:52.011557  153921 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 12:05:52.011564  153921 command_runner.go:130] > # enable_pod_events = false
	I0729 12:05:52.011570  153921 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 12:05:52.011585  153921 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 12:05:52.011591  153921 command_runner.go:130] > # default_runtime = "runc"
	I0729 12:05:52.011598  153921 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 12:05:52.011605  153921 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 12:05:52.011616  153921 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 12:05:52.011623  153921 command_runner.go:130] > # creation as a file is not desired either.
	I0729 12:05:52.011631  153921 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 12:05:52.011638  153921 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 12:05:52.011643  153921 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 12:05:52.011647  153921 command_runner.go:130] > # ]
	I0729 12:05:52.011653  153921 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 12:05:52.011662  153921 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 12:05:52.011667  153921 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 12:05:52.011674  153921 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 12:05:52.011678  153921 command_runner.go:130] > #
	I0729 12:05:52.011683  153921 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 12:05:52.011690  153921 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 12:05:52.011709  153921 command_runner.go:130] > # runtime_type = "oci"
	I0729 12:05:52.011716  153921 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 12:05:52.011721  153921 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 12:05:52.011726  153921 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 12:05:52.011730  153921 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 12:05:52.011736  153921 command_runner.go:130] > # monitor_env = []
	I0729 12:05:52.011740  153921 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 12:05:52.011745  153921 command_runner.go:130] > # allowed_annotations = []
	I0729 12:05:52.011751  153921 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 12:05:52.011756  153921 command_runner.go:130] > # Where:
	I0729 12:05:52.011762  153921 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 12:05:52.011770  153921 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 12:05:52.011776  153921 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 12:05:52.011788  153921 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 12:05:52.011792  153921 command_runner.go:130] > #   in $PATH.
	I0729 12:05:52.011799  153921 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 12:05:52.011806  153921 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 12:05:52.011812  153921 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 12:05:52.011818  153921 command_runner.go:130] > #   state.
	I0729 12:05:52.011825  153921 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 12:05:52.011833  153921 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 12:05:52.011839  153921 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 12:05:52.011846  153921 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 12:05:52.011852  153921 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 12:05:52.011860  153921 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 12:05:52.011866  153921 command_runner.go:130] > #   The currently recognized values are:
	I0729 12:05:52.011872  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 12:05:52.011881  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 12:05:52.011889  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 12:05:52.011897  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 12:05:52.011904  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 12:05:52.011912  153921 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 12:05:52.011918  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 12:05:52.011926  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 12:05:52.011932  153921 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 12:05:52.011940  153921 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 12:05:52.011944  153921 command_runner.go:130] > #   deprecated option "conmon".
	I0729 12:05:52.011951  153921 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 12:05:52.011956  153921 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 12:05:52.011964  153921 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 12:05:52.011969  153921 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 12:05:52.011978  153921 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 12:05:52.011983  153921 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 12:05:52.011988  153921 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 12:05:52.011995  153921 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 12:05:52.011998  153921 command_runner.go:130] > #
	I0729 12:05:52.012003  153921 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 12:05:52.012006  153921 command_runner.go:130] > #
	I0729 12:05:52.012012  153921 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 12:05:52.012018  153921 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 12:05:52.012022  153921 command_runner.go:130] > #
	I0729 12:05:52.012028  153921 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 12:05:52.012036  153921 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 12:05:52.012039  153921 command_runner.go:130] > #
	I0729 12:05:52.012046  153921 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 12:05:52.012050  153921 command_runner.go:130] > # feature.
	I0729 12:05:52.012056  153921 command_runner.go:130] > #
	I0729 12:05:52.012061  153921 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 12:05:52.012069  153921 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 12:05:52.012076  153921 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 12:05:52.012084  153921 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 12:05:52.012090  153921 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 12:05:52.012095  153921 command_runner.go:130] > #
	I0729 12:05:52.012101  153921 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 12:05:52.012108  153921 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 12:05:52.012111  153921 command_runner.go:130] > #
	I0729 12:05:52.012117  153921 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 12:05:52.012124  153921 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 12:05:52.012127  153921 command_runner.go:130] > #
	I0729 12:05:52.012135  153921 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 12:05:52.012141  153921 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 12:05:52.012146  153921 command_runner.go:130] > # limitation.
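	[Editor's note] To make the seccomp notifier described in the comments above usable, the runtime handler that a pod runs under has to allow the relevant annotation. The sketch below is a minimal, hypothetical handler stanza for /etc/crio/crio.conf; the handler name "runc-notify" is an assumption for illustration, not a value taken from this node's configuration, and the binary paths simply reuse the runc values shown further down.
	
	# hypothetical sketch, not part of the logged config
	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"                 # assumed, same binary as the default runc handler
	runtime_type = "oci"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	
	A pod would then opt in by selecting this runtime class and setting the annotation io.kubernetes.cri-o.seccompNotifierAction to "stop", with restartPolicy set to Never, as the comments above require.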
	I0729 12:05:52.012151  153921 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 12:05:52.012158  153921 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 12:05:52.012161  153921 command_runner.go:130] > runtime_type = "oci"
	I0729 12:05:52.012166  153921 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 12:05:52.012172  153921 command_runner.go:130] > runtime_config_path = ""
	I0729 12:05:52.012177  153921 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 12:05:52.012182  153921 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 12:05:52.012185  153921 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 12:05:52.012191  153921 command_runner.go:130] > monitor_env = [
	I0729 12:05:52.012197  153921 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 12:05:52.012202  153921 command_runner.go:130] > ]
	I0729 12:05:52.012206  153921 command_runner.go:130] > privileged_without_host_devices = false
	I0729 12:05:52.012213  153921 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 12:05:52.012221  153921 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 12:05:52.012227  153921 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 12:05:52.012237  153921 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 12:05:52.012246  153921 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 12:05:52.012252  153921 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 12:05:52.012261  153921 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 12:05:52.012268  153921 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 12:05:52.012275  153921 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 12:05:52.012282  153921 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 12:05:52.012287  153921 command_runner.go:130] > # Example:
	I0729 12:05:52.012292  153921 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 12:05:52.012297  153921 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 12:05:52.012301  153921 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 12:05:52.012308  153921 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 12:05:52.012311  153921 command_runner.go:130] > # cpuset = 0
	I0729 12:05:52.012315  153921 command_runner.go:130] > # cpushares = "0-1"
	I0729 12:05:52.012318  153921 command_runner.go:130] > # Where:
	I0729 12:05:52.012323  153921 command_runner.go:130] > # The workload name is workload-type.
	I0729 12:05:52.012329  153921 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 12:05:52.012334  153921 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 12:05:52.012339  153921 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 12:05:52.012346  153921 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 12:05:52.012351  153921 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
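	[Editor's note] As a worked example of the workloads table explained above, a hypothetical workload stanza could look like the sketch below; the workload name "throttled" and the resource values are illustrative assumptions only. Note that the commented example above appears to transpose the two resource values; the sketch follows the field names, using cpuset as a CPU list and cpushares as a share count.
	
	# hypothetical sketch, not part of the logged config
	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.throttled.resources]
	cpushares = 512
	cpuset = "0-1"
	
	A pod opting in would carry the io.crio/workload annotation (key only, value ignored); a per-container override would use an annotation of the form io.crio.workload-type.cpuset/<container-name>, per the format described in the comments above.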
	I0729 12:05:52.012356  153921 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 12:05:52.012362  153921 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 12:05:52.012366  153921 command_runner.go:130] > # Default value is set to true
	I0729 12:05:52.012371  153921 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 12:05:52.012377  153921 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 12:05:52.012382  153921 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 12:05:52.012386  153921 command_runner.go:130] > # Default value is set to 'false'
	I0729 12:05:52.012389  153921 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 12:05:52.012395  153921 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 12:05:52.012398  153921 command_runner.go:130] > #
	I0729 12:05:52.012403  153921 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 12:05:52.012408  153921 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 12:05:52.012414  153921 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 12:05:52.012420  153921 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 12:05:52.012425  153921 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 12:05:52.012429  153921 command_runner.go:130] > [crio.image]
	I0729 12:05:52.012435  153921 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 12:05:52.012439  153921 command_runner.go:130] > # default_transport = "docker://"
	I0729 12:05:52.012444  153921 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 12:05:52.012450  153921 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 12:05:52.012454  153921 command_runner.go:130] > # global_auth_file = ""
	I0729 12:05:52.012459  153921 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 12:05:52.012463  153921 command_runner.go:130] > # This option supports live configuration reload.
	I0729 12:05:52.012468  153921 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 12:05:52.012473  153921 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 12:05:52.012481  153921 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 12:05:52.012485  153921 command_runner.go:130] > # This option supports live configuration reload.
	I0729 12:05:52.012489  153921 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 12:05:52.012495  153921 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 12:05:52.012500  153921 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 12:05:52.012506  153921 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 12:05:52.012511  153921 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 12:05:52.012514  153921 command_runner.go:130] > # pause_command = "/pause"
	I0729 12:05:52.012520  153921 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 12:05:52.012526  153921 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 12:05:52.012532  153921 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 12:05:52.012539  153921 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 12:05:52.012547  153921 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 12:05:52.012553  153921 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 12:05:52.012560  153921 command_runner.go:130] > # pinned_images = [
	I0729 12:05:52.012563  153921 command_runner.go:130] > # ]
	I0729 12:05:52.012569  153921 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 12:05:52.012577  153921 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 12:05:52.012584  153921 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 12:05:52.012592  153921 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 12:05:52.012598  153921 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 12:05:52.012602  153921 command_runner.go:130] > # signature_policy = ""
	I0729 12:05:52.012607  153921 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 12:05:52.012616  153921 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 12:05:52.012622  153921 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 12:05:52.012630  153921 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 12:05:52.012637  153921 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 12:05:52.012644  153921 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 12:05:52.012650  153921 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 12:05:52.012657  153921 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 12:05:52.012661  153921 command_runner.go:130] > # changing them here.
	I0729 12:05:52.012667  153921 command_runner.go:130] > # insecure_registries = [
	I0729 12:05:52.012671  153921 command_runner.go:130] > # ]
	I0729 12:05:52.012677  153921 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 12:05:52.012683  153921 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 12:05:52.012688  153921 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 12:05:52.012695  153921 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 12:05:52.012699  153921 command_runner.go:130] > # big_files_temporary_dir = ""
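	Putting a few of these [crio.image] options together, a minimal override might look like the following sketch (the pause image matches the documented default; the registry host and pinned image are illustrative):

```toml
[crio.image]
# Infra/pause image, also pinned so the kubelet never garbage-collects it.
pause_image = "registry.k8s.io/pause:3.9"
pinned_images = [
	"registry.k8s.io/pause:3.9",
]
# Registries to pull from without TLS verification; prefer configuring these
# in /etc/containers/registries.conf where possible.
insecure_registries = [
	"registry.example.internal:5000",
]
# Handle image volumes by creating a directory inside the container.
image_volumes = "mkdir"
```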
	I0729 12:05:52.012707  153921 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 12:05:52.012711  153921 command_runner.go:130] > # CNI plugins.
	I0729 12:05:52.012714  153921 command_runner.go:130] > [crio.network]
	I0729 12:05:52.012720  153921 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 12:05:52.012727  153921 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 12:05:52.012731  153921 command_runner.go:130] > # cni_default_network = ""
	I0729 12:05:52.012739  153921 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 12:05:52.012743  153921 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 12:05:52.012751  153921 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 12:05:52.012755  153921 command_runner.go:130] > # plugin_dirs = [
	I0729 12:05:52.012761  153921 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 12:05:52.012763  153921 command_runner.go:130] > # ]
	I0729 12:05:52.012769  153921 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 12:05:52.012773  153921 command_runner.go:130] > [crio.metrics]
	I0729 12:05:52.012779  153921 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 12:05:52.012787  153921 command_runner.go:130] > enable_metrics = true
	I0729 12:05:52.012791  153921 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 12:05:52.012798  153921 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 12:05:52.012806  153921 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0729 12:05:52.012811  153921 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 12:05:52.012817  153921 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 12:05:52.012823  153921 command_runner.go:130] > # metrics_collectors = [
	I0729 12:05:52.012827  153921 command_runner.go:130] > # 	"operations",
	I0729 12:05:52.012832  153921 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 12:05:52.012839  153921 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 12:05:52.012844  153921 command_runner.go:130] > # 	"operations_errors",
	I0729 12:05:52.012848  153921 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 12:05:52.012853  153921 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 12:05:52.012857  153921 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 12:05:52.012862  153921 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 12:05:52.012869  153921 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 12:05:52.012873  153921 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 12:05:52.012880  153921 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 12:05:52.012885  153921 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 12:05:52.012891  153921 command_runner.go:130] > # 	"containers_oom_total",
	I0729 12:05:52.012895  153921 command_runner.go:130] > # 	"containers_oom",
	I0729 12:05:52.012901  153921 command_runner.go:130] > # 	"processes_defunct",
	I0729 12:05:52.012905  153921 command_runner.go:130] > # 	"operations_total",
	I0729 12:05:52.012909  153921 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 12:05:52.012914  153921 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 12:05:52.012920  153921 command_runner.go:130] > # 	"operations_errors_total",
	I0729 12:05:52.012924  153921 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 12:05:52.012928  153921 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 12:05:52.012932  153921 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 12:05:52.012939  153921 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 12:05:52.012943  153921 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 12:05:52.012950  153921 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 12:05:52.012955  153921 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 12:05:52.012979  153921 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 12:05:52.012988  153921 command_runner.go:130] > # ]
	I0729 12:05:52.012996  153921 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 12:05:52.013005  153921 command_runner.go:130] > # metrics_port = 9090
	I0729 12:05:52.013011  153921 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 12:05:52.013018  153921 command_runner.go:130] > # metrics_socket = ""
	I0729 12:05:52.013023  153921 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 12:05:52.013031  153921 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 12:05:52.013038  153921 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 12:05:52.013045  153921 command_runner.go:130] > # certificate on any modification event.
	I0729 12:05:52.013049  153921 command_runner.go:130] > # metrics_cert = ""
	I0729 12:05:52.013056  153921 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 12:05:52.013062  153921 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 12:05:52.013068  153921 command_runner.go:130] > # metrics_key = ""
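	As a sketch of how these metrics options combine (collector names taken from the documented list above; the port is the documented default):

```toml
[crio.metrics]
enable_metrics = true
# Collect only a subset of the documented collectors; names may also be given
# with the "crio_" or "container_runtime_" prefixes.
metrics_collectors = [
	"operations",
	"image_pulls_failures",
	"containers_oom_total",
]
metrics_port = 9090
```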
	I0729 12:05:52.013073  153921 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 12:05:52.013079  153921 command_runner.go:130] > [crio.tracing]
	I0729 12:05:52.013084  153921 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 12:05:52.013090  153921 command_runner.go:130] > # enable_tracing = false
	I0729 12:05:52.013095  153921 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 12:05:52.013102  153921 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 12:05:52.013108  153921 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 12:05:52.013115  153921 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
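	To actually export traces, the documented options would be enabled along these lines (endpoint and sampling rate illustrative; 1000000 means every span is sampled):

```toml
[crio.tracing]
enable_tracing = true
# OTLP/gRPC collector endpoint to export spans to.
tracing_endpoint = "0.0.0.0:4317"
tracing_sampling_rate_per_million = 1000000
```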
	I0729 12:05:52.013120  153921 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 12:05:52.013125  153921 command_runner.go:130] > [crio.nri]
	I0729 12:05:52.013130  153921 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 12:05:52.013136  153921 command_runner.go:130] > # enable_nri = false
	I0729 12:05:52.013140  153921 command_runner.go:130] > # NRI socket to listen on.
	I0729 12:05:52.013264  153921 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 12:05:52.013285  153921 command_runner.go:130] > # NRI plugin directory to use.
	I0729 12:05:52.013292  153921 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 12:05:52.013298  153921 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 12:05:52.013312  153921 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 12:05:52.013324  153921 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 12:05:52.013333  153921 command_runner.go:130] > # nri_disable_connections = false
	I0729 12:05:52.013341  153921 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 12:05:52.013417  153921 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 12:05:52.013432  153921 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 12:05:52.013440  153921 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
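	Similarly, a sketch enabling NRI with the defaults documented above:

```toml
[crio.nri]
enable_nri = true
# Socket, plugin directory and timeouts as documented in the comments above.
nri_listen = "/var/run/nri/nri.sock"
nri_plugin_dir = "/opt/nri/plugins"
nri_plugin_registration_timeout = "5s"
nri_plugin_request_timeout = "2s"
```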
	I0729 12:05:52.013449  153921 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 12:05:52.013457  153921 command_runner.go:130] > [crio.stats]
	I0729 12:05:52.013463  153921 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 12:05:52.013471  153921 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 12:05:52.013476  153921 command_runner.go:130] > # stats_collection_period = 0
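	And to switch from on-demand to periodic stats collection (interval illustrative):

```toml
[crio.stats]
# Collect pod and container stats every 10 seconds instead of on demand.
stats_collection_period = 10
```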
	I0729 12:05:52.013500  153921 command_runner.go:130] ! time="2024-07-29 12:05:51.972545700Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 12:05:52.013534  153921 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 12:05:52.013666  153921 cni.go:84] Creating CNI manager for ""
	I0729 12:05:52.013678  153921 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 12:05:52.013688  153921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:05:52.013736  153921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-293807 NodeName:multinode-293807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:05:52.013889  153921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-293807"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:05:52.013952  153921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:05:52.023560  153921 command_runner.go:130] > kubeadm
	I0729 12:05:52.023585  153921 command_runner.go:130] > kubectl
	I0729 12:05:52.023592  153921 command_runner.go:130] > kubelet
	I0729 12:05:52.023636  153921 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:05:52.023697  153921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 12:05:52.033341  153921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0729 12:05:52.050554  153921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:05:52.067704  153921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0729 12:05:52.084624  153921 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0729 12:05:52.088472  153921 command_runner.go:130] > 192.168.39.26	control-plane.minikube.internal
	I0729 12:05:52.088556  153921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:05:52.231649  153921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:05:52.246565  153921 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807 for IP: 192.168.39.26
	I0729 12:05:52.246593  153921 certs.go:194] generating shared ca certs ...
	I0729 12:05:52.246613  153921 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:05:52.246802  153921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 12:05:52.246857  153921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 12:05:52.246871  153921 certs.go:256] generating profile certs ...
	I0729 12:05:52.246968  153921 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/client.key
	I0729 12:05:52.247047  153921 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/apiserver.key.e2d5216b
	I0729 12:05:52.247097  153921 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/proxy-client.key
	I0729 12:05:52.247111  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 12:05:52.247131  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 12:05:52.247148  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 12:05:52.247165  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 12:05:52.247182  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 12:05:52.247201  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 12:05:52.247220  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 12:05:52.247236  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 12:05:52.247302  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 12:05:52.247345  153921 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 12:05:52.247357  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 12:05:52.247396  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 12:05:52.247429  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:05:52.247459  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 12:05:52.247514  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:05:52.247560  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:05:52.247580  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem -> /usr/share/ca-certificates/120963.pem
	I0729 12:05:52.247598  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /usr/share/ca-certificates/1209632.pem
	I0729 12:05:52.248297  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:05:52.272716  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 12:05:52.296570  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:05:52.321612  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:05:52.346351  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 12:05:52.371499  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 12:05:52.396122  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:05:52.420978  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:05:52.446331  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:05:52.470989  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 12:05:52.494836  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 12:05:52.518812  153921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:05:52.535666  153921 ssh_runner.go:195] Run: openssl version
	I0729 12:05:52.541263  153921 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 12:05:52.541501  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 12:05:52.552515  153921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 12:05:52.556987  153921 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 12:05:52.557106  153921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 12:05:52.557168  153921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 12:05:52.562746  153921 command_runner.go:130] > 51391683
	I0729 12:05:52.562832  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
	I0729 12:05:52.575448  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 12:05:52.600345  153921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 12:05:52.604749  153921 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 12:05:52.604879  153921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 12:05:52.604928  153921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 12:05:52.610627  153921 command_runner.go:130] > 3ec20f2e
	I0729 12:05:52.610704  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:05:52.620369  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:05:52.631361  153921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:05:52.635818  153921 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:05:52.635852  153921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:05:52.635905  153921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:05:52.641373  153921 command_runner.go:130] > b5213941
	I0729 12:05:52.641550  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:05:52.651087  153921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:05:52.655603  153921 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:05:52.655632  153921 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 12:05:52.655641  153921 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0729 12:05:52.655650  153921 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 12:05:52.655658  153921 command_runner.go:130] > Access: 2024-07-29 11:59:09.216116967 +0000
	I0729 12:05:52.655665  153921 command_runner.go:130] > Modify: 2024-07-29 11:59:09.216116967 +0000
	I0729 12:05:52.655672  153921 command_runner.go:130] > Change: 2024-07-29 11:59:09.216116967 +0000
	I0729 12:05:52.655677  153921 command_runner.go:130] >  Birth: 2024-07-29 11:59:09.216116967 +0000
	I0729 12:05:52.655755  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:05:52.661401  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.661493  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:05:52.667374  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.667463  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:05:52.673324  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.673520  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:05:52.679202  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.679293  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:05:52.684920  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.685019  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 12:05:52.690564  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.690645  153921 kubeadm.go:392] StartCluster: {Name:multinode-293807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-293807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:05:52.690824  153921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:05:52.690903  153921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:05:52.725919  153921 command_runner.go:130] > c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf
	I0729 12:05:52.725953  153921 command_runner.go:130] > 8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728
	I0729 12:05:52.725963  153921 command_runner.go:130] > 3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97
	I0729 12:05:52.725975  153921 command_runner.go:130] > 8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7
	I0729 12:05:52.725983  153921 command_runner.go:130] > 6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7
	I0729 12:05:52.726028  153921 command_runner.go:130] > df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c
	I0729 12:05:52.726047  153921 command_runner.go:130] > fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4
	I0729 12:05:52.726114  153921 command_runner.go:130] > 876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f
	I0729 12:05:52.727682  153921 cri.go:89] found id: "c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf"
	I0729 12:05:52.727701  153921 cri.go:89] found id: "8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728"
	I0729 12:05:52.727706  153921 cri.go:89] found id: "3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97"
	I0729 12:05:52.727711  153921 cri.go:89] found id: "8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7"
	I0729 12:05:52.727714  153921 cri.go:89] found id: "6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7"
	I0729 12:05:52.727719  153921 cri.go:89] found id: "df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c"
	I0729 12:05:52.727722  153921 cri.go:89] found id: "fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4"
	I0729 12:05:52.727726  153921 cri.go:89] found id: "876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f"
	I0729 12:05:52.727730  153921 cri.go:89] found id: ""
	I0729 12:05:52.727794  153921 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.907094760Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254858907065091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c484d8b-0d7f-4694-833c-45d8fa31e314 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.907948440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3992915-f11c-4154-abf9-872608a5f147 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.908022005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3992915-f11c-4154-abf9-872608a5f147 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.908367536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f69c067b6438ecb6a0bb7af97b5d903c85ce20d31f04353f2ae2d7bbef8335b,PodSandboxId:9f404395fcb142a9b4456cf414d0b6425fa9d5d86326fc50ea7f7a94ba5c4f51,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722254792676658392,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba73282e10b9ee46d7003f6ccbd7e8fde2b1fb12f6b77bc53fcc217a23b227,PodSandboxId:8fd2eab287847ecbaedfb099bc70e8f0ec30d22e547d58a3e6d13db40b156658,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722254759055587993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6f6b69bf3ee03e8b62e0500a953d5fe5ae6241dd7b720e3377d0a6945983e2,PodSandboxId:f9da110c3037e38586da452be5f4b8e1af60bf8b22ce19dd35ab010e7c884946,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254759003117837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9603547d9c6e2a2e316e3f52e65e93471bfd4c4a0adf42690df43bca8f48d30a,PodSandboxId:9576a6db9cf5f32627e5a485077584d8f8ac571746a42f3bc5a2c1c448830f8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254758920729306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]
string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a5c8d6c2aa8b243d0e485f25218696c45f132d81ae93515aa708743cea4f2c,PodSandboxId:241012740a341c357472db9af2f02549409000560c3a4c95fb24f6344b7feeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254758917845243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.ku
bernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7737deecc681c29844d9309e7c35cc28580fc2869196970b0c1d60834e7851d0,PodSandboxId:5c21c4684ed60c57554b17f2724e26dcc708fb4a629fc0bef058e3bbb58f6d46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254755130353657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},Annotations:map[string]
string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ad017663bf344289f2f515a43a65cc735f0b1e7ca966b460df94f93bf0c9a8,PodSandboxId:3c5be8bf6408e36d4457928eae6099d5ee65da62e8f47e4bf65f3ae8639b85da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254755062006618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:map[string]string{io.kubernetes.container.hash: 12446c5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87435c2f87aa6eaf5d39856b2b85194c451cc4c0aed10fed1bc0258f36d3ba35,PodSandboxId:685c423ee78861fa26f6c582001d5df568f5621d32d16171c90161f962baa6b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254755093644501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb577a51515f8d3a66c6a8db1c70cadc89100a42310c5ad35badf7ed786930e,PodSandboxId:eddda9de63feac095699511b75fda1f8edec8214f84c3f5ae981be1ae0bf47c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254755055847122,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c02cce176e08cb044b43a748ca490abdfcfae6485a584e04fc72e9cc6cb94cc,PodSandboxId:0910f59549a24fb230cab625039a377bc21d63e933ed5dc57fbfba747ae0674e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722254439096243531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf,PodSandboxId:e275b8d2f708481b07032d5f38763f42e28e161cfb73cf45d30c55ba20e2b4d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722254387942804976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728,PodSandboxId:07bdd82b9a9b80a3d842ce8654c2acc02d803e6afe43984d67af933788e3c664,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254387903609595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.kubernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97,PodSandboxId:a732bac4807fa1dbd1524a4a6fead81aa4168ccf9f06ab367ab49592d75e4a22,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722254376087197924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7,PodSandboxId:2c256554ef1e40ada8ea9a0bd2ca5e1ba2000191b5426ae3f218c1508eed4b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722254372919105784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7,PodSandboxId:482c57be2aacd2d1c65abc31fc83987cadbdfd2a13639fd4926c6d6d4e049dda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722254353269733349,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc
27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c,PodSandboxId:87de68fefdccebd4a1b9f2fe8ff3aa1749908206f41e06564a104942d394a0a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722254353264522127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 12446c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4,PodSandboxId:1356ca0a9f891da4560b095557c201e3232a1765c3a3988794022dde0f76d097,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722254353214194473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f,PodSandboxId:df3427fe72c07b35621d6314b880440a09d9c9214b7e6ca8ceb0a372e066fe21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722254353193908031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3992915-f11c-4154-abf9-872608a5f147 name=/runtime.v1.RuntimeService/ListContainers
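The entries above and below record the CRI gRPC traffic CRI-O serves while the node is up: repeated Version, ImageFsInfo, and ListContainers request/response pairs, each logged by the otel-collector interceptors with a per-call id. A minimal sketch of issuing the same three queries by hand, assuming crictl is installed on the node and the default CRI-O socket path is in use:

    # assumption: CRI-O listening on its default socket, crictl available on the node
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers, no filters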
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.949721626Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a452383-892a-4834-8bc4-a1066d9ac9ce name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.949820183Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a452383-892a-4834-8bc4-a1066d9ac9ce name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.951104304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05d6dda4-7bc4-4472-96f5-0114fadb78f8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.951574623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254858951546265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05d6dda4-7bc4-4472-96f5-0114fadb78f8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.952011294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a397184-9f8e-4c70-a2b1-27d49403eb10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.952078206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a397184-9f8e-4c70-a2b1-27d49403eb10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.952409723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f69c067b6438ecb6a0bb7af97b5d903c85ce20d31f04353f2ae2d7bbef8335b,PodSandboxId:9f404395fcb142a9b4456cf414d0b6425fa9d5d86326fc50ea7f7a94ba5c4f51,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722254792676658392,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba73282e10b9ee46d7003f6ccbd7e8fde2b1fb12f6b77bc53fcc217a23b227,PodSandboxId:8fd2eab287847ecbaedfb099bc70e8f0ec30d22e547d58a3e6d13db40b156658,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722254759055587993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6f6b69bf3ee03e8b62e0500a953d5fe5ae6241dd7b720e3377d0a6945983e2,PodSandboxId:f9da110c3037e38586da452be5f4b8e1af60bf8b22ce19dd35ab010e7c884946,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254759003117837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9603547d9c6e2a2e316e3f52e65e93471bfd4c4a0adf42690df43bca8f48d30a,PodSandboxId:9576a6db9cf5f32627e5a485077584d8f8ac571746a42f3bc5a2c1c448830f8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254758920729306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]
string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a5c8d6c2aa8b243d0e485f25218696c45f132d81ae93515aa708743cea4f2c,PodSandboxId:241012740a341c357472db9af2f02549409000560c3a4c95fb24f6344b7feeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254758917845243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.ku
bernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7737deecc681c29844d9309e7c35cc28580fc2869196970b0c1d60834e7851d0,PodSandboxId:5c21c4684ed60c57554b17f2724e26dcc708fb4a629fc0bef058e3bbb58f6d46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254755130353657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},Annotations:map[string]
string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ad017663bf344289f2f515a43a65cc735f0b1e7ca966b460df94f93bf0c9a8,PodSandboxId:3c5be8bf6408e36d4457928eae6099d5ee65da62e8f47e4bf65f3ae8639b85da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254755062006618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:map[string]string{io.kubernetes.container.hash: 12446c5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87435c2f87aa6eaf5d39856b2b85194c451cc4c0aed10fed1bc0258f36d3ba35,PodSandboxId:685c423ee78861fa26f6c582001d5df568f5621d32d16171c90161f962baa6b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254755093644501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb577a51515f8d3a66c6a8db1c70cadc89100a42310c5ad35badf7ed786930e,PodSandboxId:eddda9de63feac095699511b75fda1f8edec8214f84c3f5ae981be1ae0bf47c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254755055847122,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c02cce176e08cb044b43a748ca490abdfcfae6485a584e04fc72e9cc6cb94cc,PodSandboxId:0910f59549a24fb230cab625039a377bc21d63e933ed5dc57fbfba747ae0674e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722254439096243531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf,PodSandboxId:e275b8d2f708481b07032d5f38763f42e28e161cfb73cf45d30c55ba20e2b4d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722254387942804976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728,PodSandboxId:07bdd82b9a9b80a3d842ce8654c2acc02d803e6afe43984d67af933788e3c664,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254387903609595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.kubernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97,PodSandboxId:a732bac4807fa1dbd1524a4a6fead81aa4168ccf9f06ab367ab49592d75e4a22,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722254376087197924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7,PodSandboxId:2c256554ef1e40ada8ea9a0bd2ca5e1ba2000191b5426ae3f218c1508eed4b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722254372919105784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7,PodSandboxId:482c57be2aacd2d1c65abc31fc83987cadbdfd2a13639fd4926c6d6d4e049dda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722254353269733349,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc
27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c,PodSandboxId:87de68fefdccebd4a1b9f2fe8ff3aa1749908206f41e06564a104942d394a0a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722254353264522127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 12446c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4,PodSandboxId:1356ca0a9f891da4560b095557c201e3232a1765c3a3988794022dde0f76d097,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722254353214194473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f,PodSandboxId:df3427fe72c07b35621d6314b880440a09d9c9214b7e6ca8ceb0a372e066fe21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722254353193908031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a397184-9f8e-4c70-a2b1-27d49403eb10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.991397734Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=abb4ca37-1417-47b5-a959-3a90ceb29316 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.991580580Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=abb4ca37-1417-47b5-a959-3a90ceb29316 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.992824198Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=080716a3-6868-426b-af41-57f52dc524ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.993235652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254858993212708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=080716a3-6868-426b-af41-57f52dc524ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.993865856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6492a18-6e74-4b39-9ed3-aa0d9c3381c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.993923527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6492a18-6e74-4b39-9ed3-aa0d9c3381c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:38 multinode-293807 crio[2878]: time="2024-07-29 12:07:38.994246899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f69c067b6438ecb6a0bb7af97b5d903c85ce20d31f04353f2ae2d7bbef8335b,PodSandboxId:9f404395fcb142a9b4456cf414d0b6425fa9d5d86326fc50ea7f7a94ba5c4f51,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722254792676658392,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba73282e10b9ee46d7003f6ccbd7e8fde2b1fb12f6b77bc53fcc217a23b227,PodSandboxId:8fd2eab287847ecbaedfb099bc70e8f0ec30d22e547d58a3e6d13db40b156658,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722254759055587993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6f6b69bf3ee03e8b62e0500a953d5fe5ae6241dd7b720e3377d0a6945983e2,PodSandboxId:f9da110c3037e38586da452be5f4b8e1af60bf8b22ce19dd35ab010e7c884946,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254759003117837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9603547d9c6e2a2e316e3f52e65e93471bfd4c4a0adf42690df43bca8f48d30a,PodSandboxId:9576a6db9cf5f32627e5a485077584d8f8ac571746a42f3bc5a2c1c448830f8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254758920729306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]
string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a5c8d6c2aa8b243d0e485f25218696c45f132d81ae93515aa708743cea4f2c,PodSandboxId:241012740a341c357472db9af2f02549409000560c3a4c95fb24f6344b7feeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254758917845243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.ku
bernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7737deecc681c29844d9309e7c35cc28580fc2869196970b0c1d60834e7851d0,PodSandboxId:5c21c4684ed60c57554b17f2724e26dcc708fb4a629fc0bef058e3bbb58f6d46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254755130353657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},Annotations:map[string]
string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ad017663bf344289f2f515a43a65cc735f0b1e7ca966b460df94f93bf0c9a8,PodSandboxId:3c5be8bf6408e36d4457928eae6099d5ee65da62e8f47e4bf65f3ae8639b85da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254755062006618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:map[string]string{io.kubernetes.container.hash: 12446c5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87435c2f87aa6eaf5d39856b2b85194c451cc4c0aed10fed1bc0258f36d3ba35,PodSandboxId:685c423ee78861fa26f6c582001d5df568f5621d32d16171c90161f962baa6b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254755093644501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb577a51515f8d3a66c6a8db1c70cadc89100a42310c5ad35badf7ed786930e,PodSandboxId:eddda9de63feac095699511b75fda1f8edec8214f84c3f5ae981be1ae0bf47c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254755055847122,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c02cce176e08cb044b43a748ca490abdfcfae6485a584e04fc72e9cc6cb94cc,PodSandboxId:0910f59549a24fb230cab625039a377bc21d63e933ed5dc57fbfba747ae0674e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722254439096243531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf,PodSandboxId:e275b8d2f708481b07032d5f38763f42e28e161cfb73cf45d30c55ba20e2b4d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722254387942804976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728,PodSandboxId:07bdd82b9a9b80a3d842ce8654c2acc02d803e6afe43984d67af933788e3c664,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254387903609595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.kubernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97,PodSandboxId:a732bac4807fa1dbd1524a4a6fead81aa4168ccf9f06ab367ab49592d75e4a22,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722254376087197924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7,PodSandboxId:2c256554ef1e40ada8ea9a0bd2ca5e1ba2000191b5426ae3f218c1508eed4b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722254372919105784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7,PodSandboxId:482c57be2aacd2d1c65abc31fc83987cadbdfd2a13639fd4926c6d6d4e049dda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722254353269733349,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc
27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c,PodSandboxId:87de68fefdccebd4a1b9f2fe8ff3aa1749908206f41e06564a104942d394a0a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722254353264522127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 12446c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4,PodSandboxId:1356ca0a9f891da4560b095557c201e3232a1765c3a3988794022dde0f76d097,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722254353214194473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f,PodSandboxId:df3427fe72c07b35621d6314b880440a09d9c9214b7e6ca8ceb0a372e066fe21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722254353193908031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6492a18-6e74-4b39-9ed3-aa0d9c3381c3 name=/runtime.v1.RuntimeService/ListContainers
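The CreatedAt values in these ListContainers responses are Unix timestamps in nanoseconds. A quick way to read one, using the kube-proxy attempt-0 entry above as an example (assumes GNU date, which accepts fractional seconds after @):

    date -u -d @1722254372.919105784
    # -> Mon Jul 29 11:59:32 UTC 2024 (exact output format depends on locale)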
	Jul 29 12:07:39 multinode-293807 crio[2878]: time="2024-07-29 12:07:39.043152104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87e86906-cad0-47c5-9d5e-e2e3a780e58a name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:39 multinode-293807 crio[2878]: time="2024-07-29 12:07:39.043232754Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87e86906-cad0-47c5-9d5e-e2e3a780e58a name=/runtime.v1.RuntimeService/Version
	Jul 29 12:07:39 multinode-293807 crio[2878]: time="2024-07-29 12:07:39.044948356Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0b141b1-8a27-493e-8a33-180f0eb1c7e7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:39 multinode-293807 crio[2878]: time="2024-07-29 12:07:39.045557284Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722254859045409374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0b141b1-8a27-493e-8a33-180f0eb1c7e7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:07:39 multinode-293807 crio[2878]: time="2024-07-29 12:07:39.049214510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee7abf19-c020-42a2-82fd-1de0515aa88d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:39 multinode-293807 crio[2878]: time="2024-07-29 12:07:39.049375201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee7abf19-c020-42a2-82fd-1de0515aa88d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:07:39 multinode-293807 crio[2878]: time="2024-07-29 12:07:39.050593285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f69c067b6438ecb6a0bb7af97b5d903c85ce20d31f04353f2ae2d7bbef8335b,PodSandboxId:9f404395fcb142a9b4456cf414d0b6425fa9d5d86326fc50ea7f7a94ba5c4f51,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722254792676658392,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba73282e10b9ee46d7003f6ccbd7e8fde2b1fb12f6b77bc53fcc217a23b227,PodSandboxId:8fd2eab287847ecbaedfb099bc70e8f0ec30d22e547d58a3e6d13db40b156658,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722254759055587993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6f6b69bf3ee03e8b62e0500a953d5fe5ae6241dd7b720e3377d0a6945983e2,PodSandboxId:f9da110c3037e38586da452be5f4b8e1af60bf8b22ce19dd35ab010e7c884946,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254759003117837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9603547d9c6e2a2e316e3f52e65e93471bfd4c4a0adf42690df43bca8f48d30a,PodSandboxId:9576a6db9cf5f32627e5a485077584d8f8ac571746a42f3bc5a2c1c448830f8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254758920729306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]
string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a5c8d6c2aa8b243d0e485f25218696c45f132d81ae93515aa708743cea4f2c,PodSandboxId:241012740a341c357472db9af2f02549409000560c3a4c95fb24f6344b7feeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254758917845243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.ku
bernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7737deecc681c29844d9309e7c35cc28580fc2869196970b0c1d60834e7851d0,PodSandboxId:5c21c4684ed60c57554b17f2724e26dcc708fb4a629fc0bef058e3bbb58f6d46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254755130353657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},Annotations:map[string]
string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ad017663bf344289f2f515a43a65cc735f0b1e7ca966b460df94f93bf0c9a8,PodSandboxId:3c5be8bf6408e36d4457928eae6099d5ee65da62e8f47e4bf65f3ae8639b85da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254755062006618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:map[string]string{io.kubernetes.container.hash: 12446c5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87435c2f87aa6eaf5d39856b2b85194c451cc4c0aed10fed1bc0258f36d3ba35,PodSandboxId:685c423ee78861fa26f6c582001d5df568f5621d32d16171c90161f962baa6b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254755093644501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb577a51515f8d3a66c6a8db1c70cadc89100a42310c5ad35badf7ed786930e,PodSandboxId:eddda9de63feac095699511b75fda1f8edec8214f84c3f5ae981be1ae0bf47c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254755055847122,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c02cce176e08cb044b43a748ca490abdfcfae6485a584e04fc72e9cc6cb94cc,PodSandboxId:0910f59549a24fb230cab625039a377bc21d63e933ed5dc57fbfba747ae0674e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722254439096243531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf,PodSandboxId:e275b8d2f708481b07032d5f38763f42e28e161cfb73cf45d30c55ba20e2b4d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722254387942804976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728,PodSandboxId:07bdd82b9a9b80a3d842ce8654c2acc02d803e6afe43984d67af933788e3c664,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254387903609595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.kubernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97,PodSandboxId:a732bac4807fa1dbd1524a4a6fead81aa4168ccf9f06ab367ab49592d75e4a22,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722254376087197924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7,PodSandboxId:2c256554ef1e40ada8ea9a0bd2ca5e1ba2000191b5426ae3f218c1508eed4b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722254372919105784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7,PodSandboxId:482c57be2aacd2d1c65abc31fc83987cadbdfd2a13639fd4926c6d6d4e049dda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722254353269733349,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc
27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c,PodSandboxId:87de68fefdccebd4a1b9f2fe8ff3aa1749908206f41e06564a104942d394a0a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722254353264522127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 12446c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4,PodSandboxId:1356ca0a9f891da4560b095557c201e3232a1765c3a3988794022dde0f76d097,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722254353214194473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f,PodSandboxId:df3427fe72c07b35621d6314b880440a09d9c9214b7e6ca8ceb0a372e066fe21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722254353193908031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee7abf19-c020-42a2-82fd-1de0515aa88d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8f69c067b6438       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   9f404395fcb14       busybox-fc5497c4f-tzhl8
	90ba73282e10b       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   8fd2eab287847       kindnet-z96j2
	2d6f6b69bf3ee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   f9da110c3037e       coredns-7db6d8ff4d-w4vb7
	9603547d9c6e2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   9576a6db9cf5f       kube-proxy-5z2jx
	f7a5c8d6c2aa8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   241012740a341       storage-provisioner
	7737deecc681c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   5c21c4684ed60       kube-controller-manager-multinode-293807
	87435c2f87aa6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   685c423ee7886       kube-apiserver-multinode-293807
	e3ad017663bf3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   3c5be8bf6408e       etcd-multinode-293807
	ebb577a51515f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   eddda9de63fea       kube-scheduler-multinode-293807
	3c02cce176e08       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   0910f59549a24       busybox-fc5497c4f-tzhl8
	c1b0f5bdafedb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   e275b8d2f7084       coredns-7db6d8ff4d-w4vb7
	8746d4a660dc1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   07bdd82b9a9b8       storage-provisioner
	3afb71673c939       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   a732bac4807fa       kindnet-z96j2
	8e90b9960f92b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   2c256554ef1e4       kube-proxy-5z2jx
	6b5caf26b3818       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   482c57be2aacd       kube-scheduler-multinode-293807
	df5165ac9d720       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   87de68fefdcce       etcd-multinode-293807
	fd4b90fabffac       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   1356ca0a9f891       kube-controller-manager-multinode-293807
	876b71f991cdd       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   df3427fe72c07       kube-apiserver-multinode-293807
	
	
	==> coredns [2d6f6b69bf3ee03e8b62e0500a953d5fe5ae6241dd7b720e3377d0a6945983e2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46783 - 40276 "HINFO IN 6339179047588870057.1204484978150539655. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027646527s
	
	
	==> coredns [c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf] <==
	[INFO] 10.244.1.2:57435 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001720036s
	[INFO] 10.244.1.2:52911 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076885s
	[INFO] 10.244.1.2:51395 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056026s
	[INFO] 10.244.1.2:45677 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001158027s
	[INFO] 10.244.1.2:39978 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069551s
	[INFO] 10.244.1.2:35866 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057902s
	[INFO] 10.244.1.2:41919 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047666s
	[INFO] 10.244.0.3:51370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100434s
	[INFO] 10.244.0.3:57049 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000047853s
	[INFO] 10.244.0.3:51525 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083359s
	[INFO] 10.244.0.3:37573 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045916s
	[INFO] 10.244.1.2:52000 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001752s
	[INFO] 10.244.1.2:52490 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067196s
	[INFO] 10.244.1.2:41028 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055802s
	[INFO] 10.244.1.2:60965 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053382s
	[INFO] 10.244.0.3:42163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013759s
	[INFO] 10.244.0.3:36364 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064566s
	[INFO] 10.244.0.3:56065 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000055972s
	[INFO] 10.244.0.3:57361 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054674s
	[INFO] 10.244.1.2:58076 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134849s
	[INFO] 10.244.1.2:53602 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147888s
	[INFO] 10.244.1.2:51496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079398s
	[INFO] 10.244.1.2:52210 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077452s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-293807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-293807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=multinode-293807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_59_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:59:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-293807
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:07:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:05:58 +0000   Mon, 29 Jul 2024 11:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:05:58 +0000   Mon, 29 Jul 2024 11:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:05:58 +0000   Mon, 29 Jul 2024 11:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:05:58 +0000   Mon, 29 Jul 2024 11:59:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    multinode-293807
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa53a409ea1943e5bd1c7340d912bf1e
	  System UUID:                fa53a409-ea19-43e5-bd1c-7340d912bf1e
	  Boot ID:                    b3c0e91e-14f7-48ce-9b0a-53c67b3e5c58
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tzhl8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 coredns-7db6d8ff4d-w4vb7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m7s
	  kube-system                 etcd-multinode-293807                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m21s
	  kube-system                 kindnet-z96j2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m8s
	  kube-system                 kube-apiserver-multinode-293807             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-controller-manager-multinode-293807    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-proxy-5z2jx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-scheduler-multinode-293807             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m5s                   kube-proxy       
	  Normal  Starting                 99s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m27s (x8 over 8m27s)  kubelet          Node multinode-293807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s (x8 over 8m27s)  kubelet          Node multinode-293807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s (x7 over 8m27s)  kubelet          Node multinode-293807 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m21s                  kubelet          Node multinode-293807 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m21s                  kubelet          Node multinode-293807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m21s                  kubelet          Node multinode-293807 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m8s                   node-controller  Node multinode-293807 event: Registered Node multinode-293807 in Controller
	  Normal  NodeReady                7m52s                  kubelet          Node multinode-293807 status is now: NodeReady
	  Normal  Starting                 105s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)    kubelet          Node multinode-293807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)    kubelet          Node multinode-293807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)    kubelet          Node multinode-293807 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           89s                    node-controller  Node multinode-293807 event: Registered Node multinode-293807 in Controller
	
	
	Name:               multinode-293807-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-293807-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=multinode-293807
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_06_40_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:06:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-293807-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:07:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:07:09 +0000   Mon, 29 Jul 2024 12:06:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:07:09 +0000   Mon, 29 Jul 2024 12:06:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:07:09 +0000   Mon, 29 Jul 2024 12:06:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:07:09 +0000   Mon, 29 Jul 2024 12:06:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    multinode-293807-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 15392dc16ca04679b94d635da7e15880
	  System UUID:                15392dc1-6ca0-4679-b94d-635da7e15880
	  Boot ID:                    c920ba33-518d-4627-8729-cf0e88483791
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rjb65    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-8shlp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m24s
	  kube-system                 kube-proxy-gnh9j           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m19s                  kube-proxy  
	  Normal  Starting                 55s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m24s (x2 over 7m24s)  kubelet     Node multinode-293807-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m24s (x2 over 7m24s)  kubelet     Node multinode-293807-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m24s (x2 over 7m24s)  kubelet     Node multinode-293807-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m24s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m5s                   kubelet     Node multinode-293807-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-293807-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-293807-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-293807-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-293807-m02 status is now: NodeReady
	
	
	Name:               multinode-293807-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-293807-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=multinode-293807
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_07_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:07:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-293807-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:07:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:07:36 +0000   Mon, 29 Jul 2024 12:07:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:07:36 +0000   Mon, 29 Jul 2024 12:07:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:07:36 +0000   Mon, 29 Jul 2024 12:07:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:07:36 +0000   Mon, 29 Jul 2024 12:07:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    multinode-293807-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a2acfa83d37a4b4a99e6bb1a4f841b6e
	  System UUID:                a2acfa83-d37a-4b4a-99e6-bb1a4f841b6e
	  Boot ID:                    38fde5da-9aa3-446b-902c-3b944a5815fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6x7h4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m33s
	  kube-system                 kube-proxy-qdd9t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m33s (x2 over 6m33s)  kubelet          Node multinode-293807-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s (x2 over 6m33s)  kubelet          Node multinode-293807-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s (x2 over 6m33s)  kubelet          Node multinode-293807-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m15s                  kubelet          Node multinode-293807-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet          Node multinode-293807-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet          Node multinode-293807-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet          Node multinode-293807-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m45s                  kubelet          Starting kubelet.
	  Normal  NodeReady                5m26s                  kubelet          Node multinode-293807-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node multinode-293807-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node multinode-293807-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node multinode-293807-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                    node-controller  Node multinode-293807-m03 event: Registered Node multinode-293807-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-293807-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.056874] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058077] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.160249] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.137247] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.257038] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.033608] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.351109] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.064620] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.988283] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.079563] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.110288] systemd-fstab-generator[1474]: Ignoring "noauto" option for root device
	[  +0.124963] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.343772] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 12:00] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 12:05] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.146130] systemd-fstab-generator[2809]: Ignoring "noauto" option for root device
	[  +0.181244] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +0.137269] systemd-fstab-generator[2835]: Ignoring "noauto" option for root device
	[  +0.285771] systemd-fstab-generator[2863]: Ignoring "noauto" option for root device
	[  +0.986999] systemd-fstab-generator[2961]: Ignoring "noauto" option for root device
	[  +2.058183] systemd-fstab-generator[3086]: Ignoring "noauto" option for root device
	[  +4.625212] kauditd_printk_skb: 184 callbacks suppressed
	[Jul29 12:06] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.548241] systemd-fstab-generator[3923]: Ignoring "noauto" option for root device
	[ +18.216685] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c] <==
	{"level":"info","ts":"2024-07-29T11:59:13.622253Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T12:00:15.400263Z","caller":"traceutil/trace.go:171","msg":"trace[1514114236] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"211.272224ms","start":"2024-07-29T12:00:15.188967Z","end":"2024-07-29T12:00:15.400239Z","steps":["trace[1514114236] 'process raft request'  (duration: 208.358939ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:00:15.403696Z","caller":"traceutil/trace.go:171","msg":"trace[1213000608] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"170.391755ms","start":"2024-07-29T12:00:15.233292Z","end":"2024-07-29T12:00:15.403684Z","steps":["trace[1213000608] 'process raft request'  (duration: 170.066271ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:00:24.403143Z","caller":"traceutil/trace.go:171","msg":"trace[902457188] linearizableReadLoop","detail":"{readStateIndex:551; appliedIndex:550; }","duration":"174.917552ms","start":"2024-07-29T12:00:24.228204Z","end":"2024-07-29T12:00:24.403122Z","steps":["trace[902457188] 'read index received'  (duration: 174.743556ms)","trace[902457188] 'applied index is now lower than readState.Index'  (duration: 173.237µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:00:24.403326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.101172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-07-29T12:00:24.403461Z","caller":"traceutil/trace.go:171","msg":"trace[549595892] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:524; }","duration":"175.249269ms","start":"2024-07-29T12:00:24.2282Z","end":"2024-07-29T12:00:24.403449Z","steps":["trace[549595892] 'agreement among raft nodes before linearized reading'  (duration: 175.074452ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:00:24.40348Z","caller":"traceutil/trace.go:171","msg":"trace[1192560475] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"186.331276ms","start":"2024-07-29T12:00:24.217135Z","end":"2024-07-29T12:00:24.403466Z","steps":["trace[1192560475] 'process raft request'  (duration: 185.85595ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:00:24.673725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.563496ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12938156726643087140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:520 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T12:00:24.67411Z","caller":"traceutil/trace.go:171","msg":"trace[1435156300] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"266.199725ms","start":"2024-07-29T12:00:24.407898Z","end":"2024-07-29T12:00:24.674098Z","steps":["trace[1435156300] 'process raft request'  (duration: 54.89868ms)","trace[1435156300] 'compare'  (duration: 210.339967ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:00:24.674475Z","caller":"traceutil/trace.go:171","msg":"trace[1197836704] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"265.217215ms","start":"2024-07-29T12:00:24.409248Z","end":"2024-07-29T12:00:24.674465Z","steps":["trace[1197836704] 'process raft request'  (duration: 264.628244ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:01:06.689934Z","caller":"traceutil/trace.go:171","msg":"trace[1219340090] linearizableReadLoop","detail":"{readStateIndex:644; appliedIndex:642; }","duration":"151.346381ms","start":"2024-07-29T12:01:06.538572Z","end":"2024-07-29T12:01:06.689919Z","steps":["trace[1219340090] 'read index received'  (duration: 54.410411ms)","trace[1219340090] 'applied index is now lower than readState.Index'  (duration: 96.935147ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:01:06.690053Z","caller":"traceutil/trace.go:171","msg":"trace[1226795760] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"219.875855ms","start":"2024-07-29T12:01:06.47017Z","end":"2024-07-29T12:01:06.690045Z","steps":["trace[1226795760] 'process raft request'  (duration: 122.804216ms)","trace[1226795760] 'compare'  (duration: 96.827401ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:01:06.690214Z","caller":"traceutil/trace.go:171","msg":"trace[2129257977] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"192.560014ms","start":"2024-07-29T12:01:06.497648Z","end":"2024-07-29T12:01:06.690208Z","steps":["trace[2129257977] 'process raft request'  (duration: 192.242159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:01:06.690343Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.770711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-293807-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T12:01:06.690381Z","caller":"traceutil/trace.go:171","msg":"trace[888898775] range","detail":"{range_begin:/registry/minions/multinode-293807-m03; range_end:; response_count:1; response_revision:609; }","duration":"151.844472ms","start":"2024-07-29T12:01:06.538529Z","end":"2024-07-29T12:01:06.690374Z","steps":["trace[888898775] 'agreement among raft nodes before linearized reading'  (duration: 151.752389ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:04:19.007037Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T12:04:19.007096Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-293807","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	{"level":"warn","ts":"2024-07-29T12:04:19.007224Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:04:19.007308Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:04:19.093614Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:04:19.093701Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T12:04:19.093756Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c9867c1935b8b38d","current-leader-member-id":"c9867c1935b8b38d"}
	{"level":"info","ts":"2024-07-29T12:04:19.096577Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-07-29T12:04:19.096762Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-07-29T12:04:19.096804Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-293807","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	
	
	==> etcd [e3ad017663bf344289f2f515a43a65cc735f0b1e7ca966b460df94f93bf0c9a8] <==
	{"level":"info","ts":"2024-07-29T12:05:55.428582Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T12:05:55.42861Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T12:05:55.428919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d switched to configuration voters=(14521430496220066701)"}
	{"level":"info","ts":"2024-07-29T12:05:55.429023Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8cfb77a10e566a07","local-member-id":"c9867c1935b8b38d","added-peer-id":"c9867c1935b8b38d","added-peer-peer-urls":["https://192.168.39.26:2380"]}
	{"level":"info","ts":"2024-07-29T12:05:55.429156Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cfb77a10e566a07","local-member-id":"c9867c1935b8b38d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:05:55.429227Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:05:55.433513Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T12:05:55.437752Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c9867c1935b8b38d","initial-advertise-peer-urls":["https://192.168.39.26:2380"],"listen-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.26:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T12:05:55.437814Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T12:05:55.437902Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-07-29T12:05:55.438Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-07-29T12:05:56.772051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T12:05:56.772111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T12:05:56.772148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgPreVoteResp from c9867c1935b8b38d at term 2"}
	{"level":"info","ts":"2024-07-29T12:05:56.772162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T12:05:56.772168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgVoteResp from c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-07-29T12:05:56.772176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became leader at term 3"}
	{"level":"info","ts":"2024-07-29T12:05:56.772183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c9867c1935b8b38d elected leader c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-07-29T12:05:56.778754Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:05:56.778711Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c9867c1935b8b38d","local-member-attributes":"{Name:multinode-293807 ClientURLs:[https://192.168.39.26:2379]}","request-path":"/0/members/c9867c1935b8b38d/attributes","cluster-id":"8cfb77a10e566a07","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T12:05:56.779639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:05:56.779893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T12:05:56.779908Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T12:05:56.780539Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.26:2379"}
	{"level":"info","ts":"2024-07-29T12:05:56.781338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:07:39 up 8 min,  0 users,  load average: 0.37, 0.23, 0.12
	Linux multinode-293807 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97] <==
	I0729 12:03:37.101897       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	I0729 12:03:47.108501       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:03:47.108606       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:03:47.108772       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:03:47.108889       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	I0729 12:03:47.108987       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:03:47.109009       1 main.go:299] handling current node
	I0729 12:03:57.109790       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:03:57.109835       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:03:57.109961       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:03:57.109984       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	I0729 12:03:57.110036       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:03:57.110056       1 main.go:299] handling current node
	I0729 12:04:07.109788       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:04:07.109888       1 main.go:299] handling current node
	I0729 12:04:07.109917       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:04:07.109936       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:04:07.110131       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:04:07.110178       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	I0729 12:04:17.108995       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:04:17.109044       1 main.go:299] handling current node
	I0729 12:04:17.109060       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:04:17.109066       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:04:17.109196       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:04:17.109219       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [90ba73282e10b9ee46d7003f6ccbd7e8fde2b1fb12f6b77bc53fcc217a23b227] <==
	I0729 12:06:49.896565       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	I0729 12:06:59.897085       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:06:59.897132       1 main.go:299] handling current node
	I0729 12:06:59.897146       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:06:59.897152       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:06:59.897305       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:06:59.897331       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	I0729 12:07:09.897128       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:07:09.897177       1 main.go:299] handling current node
	I0729 12:07:09.897193       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:07:09.897199       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:07:09.897333       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:07:09.897340       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	I0729 12:07:19.897313       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:07:19.897487       1 main.go:299] handling current node
	I0729 12:07:19.897547       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:07:19.897579       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:07:19.897722       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:07:19.897744       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.2.0/24] 
	I0729 12:07:29.896523       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:07:29.896570       1 main.go:299] handling current node
	I0729 12:07:29.896584       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:07:29.896590       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:07:29.896739       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:07:29.896762       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [87435c2f87aa6eaf5d39856b2b85194c451cc4c0aed10fed1bc0258f36d3ba35] <==
	I0729 12:05:58.055280       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 12:05:58.055328       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 12:05:58.055335       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 12:05:58.056879       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 12:05:58.060855       1 aggregator.go:165] initial CRD sync complete...
	I0729 12:05:58.060895       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 12:05:58.060902       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 12:05:58.060908       1 cache.go:39] Caches are synced for autoregister controller
	I0729 12:05:58.070284       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 12:05:58.070358       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 12:05:58.070785       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0729 12:05:58.070865       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 12:05:58.079890       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 12:05:58.107613       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:05:58.107707       1 policy_source.go:224] refreshing policies
	I0729 12:05:58.107658       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 12:05:58.159607       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 12:05:58.966107       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 12:05:59.928130       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 12:06:00.050630       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 12:06:00.069814       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 12:06:00.204979       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 12:06:00.218850       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 12:06:10.798665       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 12:06:10.835036       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f] <==
	I0729 11:59:17.707391       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 11:59:17.720243       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 11:59:18.047171       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 11:59:18.543142       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 11:59:18.560493       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 11:59:18.572933       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 11:59:31.702668       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 11:59:31.809811       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 12:00:40.217274       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:33708: use of closed network connection
	E0729 12:00:40.383994       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:33722: use of closed network connection
	E0729 12:00:40.570599       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:33740: use of closed network connection
	E0729 12:00:40.730848       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:49938: use of closed network connection
	E0729 12:00:40.891578       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:49950: use of closed network connection
	E0729 12:00:41.046869       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:49976: use of closed network connection
	E0729 12:00:41.317287       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:50002: use of closed network connection
	E0729 12:00:41.489698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:50022: use of closed network connection
	E0729 12:00:41.654062       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:50038: use of closed network connection
	E0729 12:00:41.816020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:50054: use of closed network connection
	I0729 12:04:19.010710       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0729 12:04:19.036896       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:04:19.036975       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:04:19.037015       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:04:19.037075       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:04:19.037129       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7737deecc681c29844d9309e7c35cc28580fc2869196970b0c1d60834e7851d0] <==
	I0729 12:06:11.009723       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 12:06:11.399118       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:06:11.399202       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 12:06:11.460159       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:06:34.743685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.390637ms"
	I0729 12:06:34.760403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.065391ms"
	I0729 12:06:34.760565       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.989µs"
	I0729 12:06:39.005006       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-293807-m02\" does not exist"
	I0729 12:06:39.022687       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-293807-m02" podCIDRs=["10.244.1.0/24"]
	I0729 12:06:40.922999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.212µs"
	I0729 12:06:40.936169       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.056µs"
	I0729 12:06:40.967119       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.697µs"
	I0729 12:06:40.971892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.212µs"
	I0729 12:06:40.977832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.085µs"
	I0729 12:06:41.220437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="241.891µs"
	I0729 12:06:57.835640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:06:57.860683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.281µs"
	I0729 12:06:57.874586       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.406µs"
	I0729 12:07:00.271357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.047378ms"
	I0729 12:07:00.272529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.086µs"
	I0729 12:07:15.818238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:07:17.195732       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:07:17.196043       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-293807-m03\" does not exist"
	I0729 12:07:17.214082       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-293807-m03" podCIDRs=["10.244.2.0/24"]
	I0729 12:07:36.201785       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	
	
	==> kube-controller-manager [fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4] <==
	I0729 12:00:15.406998       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-293807-m02\" does not exist"
	I0729 12:00:15.419185       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-293807-m02" podCIDRs=["10.244.1.0/24"]
	I0729 12:00:16.406352       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-293807-m02"
	I0729 12:00:34.721213       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:00:37.162496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.549042ms"
	I0729 12:00:37.191759       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.197938ms"
	I0729 12:00:37.192003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.422µs"
	I0729 12:00:37.196715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.756µs"
	I0729 12:00:39.262883       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.741459ms"
	I0729 12:00:39.262957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.253µs"
	I0729 12:00:39.758726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.22304ms"
	I0729 12:00:39.759256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.871µs"
	I0729 12:01:06.693312       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:01:06.693905       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-293807-m03\" does not exist"
	I0729 12:01:06.731225       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-293807-m03" podCIDRs=["10.244.2.0/24"]
	I0729 12:01:11.426631       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-293807-m03"
	I0729 12:01:24.983236       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:01:53.620160       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:01:54.585517       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-293807-m03\" does not exist"
	I0729 12:01:54.587773       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:01:54.594946       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-293807-m03" podCIDRs=["10.244.3.0/24"]
	I0729 12:02:13.481659       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:02:51.481576       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m03"
	I0729 12:02:51.536234       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.110857ms"
	I0729 12:02:51.536388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.587µs"
	
	
	==> kube-proxy [8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7] <==
	I0729 11:59:33.369150       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:59:33.404538       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.26"]
	I0729 11:59:33.458175       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:59:33.458215       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:59:33.458232       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:59:33.462236       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:59:33.462685       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:59:33.462741       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:59:33.464388       1 config.go:192] "Starting service config controller"
	I0729 11:59:33.464752       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:59:33.464821       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:59:33.464840       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:59:33.468062       1 config.go:319] "Starting node config controller"
	I0729 11:59:33.468166       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:59:33.565398       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:59:33.565465       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:59:33.568498       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9603547d9c6e2a2e316e3f52e65e93471bfd4c4a0adf42690df43bca8f48d30a] <==
	I0729 12:05:59.271315       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:05:59.296946       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.26"]
	I0729 12:05:59.343625       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:05:59.343755       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:05:59.343809       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:05:59.346530       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:05:59.347547       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:05:59.350867       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:05:59.355062       1 config.go:192] "Starting service config controller"
	I0729 12:05:59.360135       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:05:59.360162       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:05:59.358972       1 config.go:319] "Starting node config controller"
	I0729 12:05:59.360217       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:05:59.355173       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:05:59.362174       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:05:59.362182       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 12:05:59.461251       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7] <==
	E0729 11:59:16.077393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:59:16.076197       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:59:16.077466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:59:16.906369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:59:16.906405       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 11:59:16.959735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:59:16.959781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 11:59:16.977951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:59:16.977996       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:59:17.043374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:59:17.043445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:59:17.054071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:59:17.054121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:59:17.090994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:59:17.091041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:59:17.171973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:59:17.172016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:59:17.317569       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:59:17.317616       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:59:17.330877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:59:17.330985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:59:17.336703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:59:17.336795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0729 11:59:19.763683       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 12:04:19.013166       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ebb577a51515f8d3a66c6a8db1c70cadc89100a42310c5ad35badf7ed786930e] <==
	I0729 12:05:56.008131       1 serving.go:380] Generated self-signed cert in-memory
	I0729 12:05:58.073779       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 12:05:58.073811       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:05:58.082224       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 12:05:58.082288       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0729 12:05:58.082294       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0729 12:05:58.082321       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 12:05:58.082907       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 12:05:58.082936       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 12:05:58.082950       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0729 12:05:58.082956       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0729 12:05:58.182410       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0729 12:05:58.183803       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0729 12:05:58.183805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 12:05:55 multinode-293807 kubelet[3093]: W0729 12:05:55.394163    3093 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-293807&limit=500&resourceVersion=0": dial tcp 192.168.39.26:8443: connect: connection refused
	Jul 29 12:05:55 multinode-293807 kubelet[3093]: E0729 12:05:55.394233    3093 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-293807&limit=500&resourceVersion=0": dial tcp 192.168.39.26:8443: connect: connection refused
	Jul 29 12:05:55 multinode-293807 kubelet[3093]: I0729 12:05:55.918243    3093 kubelet_node_status.go:73] "Attempting to register node" node="multinode-293807"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.124723    3093 kubelet_node_status.go:112] "Node was previously registered" node="multinode-293807"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.125102    3093 kubelet_node_status.go:76] "Successfully registered node" node="multinode-293807"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.126626    3093 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.127592    3093 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.386204    3093 apiserver.go:52] "Watching apiserver"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.391281    3093 topology_manager.go:215] "Topology Admit Handler" podUID="2d51aa0e-f3ce-4f29-9f05-1953193edbe7" podNamespace="kube-system" podName="kube-proxy-5z2jx"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.391555    3093 topology_manager.go:215] "Topology Admit Handler" podUID="0b01e79a-fb4c-4177-a131-6cb670645a7c" podNamespace="kube-system" podName="kindnet-z96j2"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.391692    3093 topology_manager.go:215] "Topology Admit Handler" podUID="be897904-1343-4ad4-a2f1-8e12137637cc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-w4vb7"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.391788    3093 topology_manager.go:215] "Topology Admit Handler" podUID="7d6946c3-cca0-47ca-bd10-618c715db560" podNamespace="kube-system" podName="storage-provisioner"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.391885    3093 topology_manager.go:215] "Topology Admit Handler" podUID="2449333d-ddfd-4a44-a8a0-0d701e603c26" podNamespace="default" podName="busybox-fc5497c4f-tzhl8"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.407079    3093 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.449305    3093 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7d6946c3-cca0-47ca-bd10-618c715db560-tmp\") pod \"storage-provisioner\" (UID: \"7d6946c3-cca0-47ca-bd10-618c715db560\") " pod="kube-system/storage-provisioner"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.449834    3093 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d51aa0e-f3ce-4f29-9f05-1953193edbe7-lib-modules\") pod \"kube-proxy-5z2jx\" (UID: \"2d51aa0e-f3ce-4f29-9f05-1953193edbe7\") " pod="kube-system/kube-proxy-5z2jx"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.450169    3093 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b01e79a-fb4c-4177-a131-6cb670645a7c-xtables-lock\") pod \"kindnet-z96j2\" (UID: \"0b01e79a-fb4c-4177-a131-6cb670645a7c\") " pod="kube-system/kindnet-z96j2"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.450320    3093 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d51aa0e-f3ce-4f29-9f05-1953193edbe7-xtables-lock\") pod \"kube-proxy-5z2jx\" (UID: \"2d51aa0e-f3ce-4f29-9f05-1953193edbe7\") " pod="kube-system/kube-proxy-5z2jx"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.450775    3093 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0b01e79a-fb4c-4177-a131-6cb670645a7c-cni-cfg\") pod \"kindnet-z96j2\" (UID: \"0b01e79a-fb4c-4177-a131-6cb670645a7c\") " pod="kube-system/kindnet-z96j2"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.451221    3093 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b01e79a-fb4c-4177-a131-6cb670645a7c-lib-modules\") pod \"kindnet-z96j2\" (UID: \"0b01e79a-fb4c-4177-a131-6cb670645a7c\") " pod="kube-system/kindnet-z96j2"
	Jul 29 12:06:54 multinode-293807 kubelet[3093]: E0729 12:06:54.456608    3093 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:06:54 multinode-293807 kubelet[3093]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:06:54 multinode-293807 kubelet[3093]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:06:54 multinode-293807 kubelet[3093]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:06:54 multinode-293807 kubelet[3093]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 12:07:38.627510  155036 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19336-113730/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-293807 -n multinode-293807
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-293807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (324.34s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 stop
E0729 12:09:27.395931  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-293807 stop: exit status 82 (2m0.477168751s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-293807-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-293807 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-293807 status: exit status 3 (18.804452912s)

                                                
                                                
-- stdout --
	multinode-293807
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-293807-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 12:10:02.013354  155701 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.54:22: connect: no route to host
	E0729 12:10:02.013395  155701 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.54:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-293807 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-293807 -n multinode-293807
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-293807 logs -n 25: (1.40324905s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m02:/home/docker/cp-test.txt                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807:/home/docker/cp-test_multinode-293807-m02_multinode-293807.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n multinode-293807 sudo cat                                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /home/docker/cp-test_multinode-293807-m02_multinode-293807.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m02:/home/docker/cp-test.txt                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03:/home/docker/cp-test_multinode-293807-m02_multinode-293807-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n multinode-293807-m03 sudo cat                                   | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /home/docker/cp-test_multinode-293807-m02_multinode-293807-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp testdata/cp-test.txt                                                | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m03:/home/docker/cp-test.txt                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1050760835/001/cp-test_multinode-293807-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m03:/home/docker/cp-test.txt                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807:/home/docker/cp-test_multinode-293807-m03_multinode-293807.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n multinode-293807 sudo cat                                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /home/docker/cp-test_multinode-293807-m03_multinode-293807.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m03:/home/docker/cp-test.txt                       | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m02:/home/docker/cp-test_multinode-293807-m03_multinode-293807-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n multinode-293807-m02 sudo cat                                   | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /home/docker/cp-test_multinode-293807-m03_multinode-293807-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-293807 node stop m03                                                          | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	| node    | multinode-293807 node start                                                             | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:02 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-293807                                                                | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC |                     |
	| stop    | -p multinode-293807                                                                     | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC |                     |
	| start   | -p multinode-293807                                                                     | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:04 UTC | 29 Jul 24 12:07 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-293807                                                                | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC |                     |
	| node    | multinode-293807 node delete                                                            | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-293807 stop                                                                   | multinode-293807 | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:04:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:04:17.960649  153921 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:04:17.960974  153921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:04:17.960984  153921 out.go:304] Setting ErrFile to fd 2...
	I0729 12:04:17.960988  153921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:04:17.961171  153921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 12:04:17.961715  153921 out.go:298] Setting JSON to false
	I0729 12:04:17.962625  153921 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6409,"bootTime":1722248249,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:04:17.962686  153921 start.go:139] virtualization: kvm guest
	I0729 12:04:17.964952  153921 out.go:177] * [multinode-293807] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:04:17.966421  153921 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 12:04:17.966428  153921 notify.go:220] Checking for updates...
	I0729 12:04:17.968825  153921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:04:17.970225  153921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:04:17.971453  153921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:04:17.972861  153921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:04:17.974337  153921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:04:17.976412  153921 config.go:182] Loaded profile config "multinode-293807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:04:17.976548  153921 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:04:17.977227  153921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:17.977307  153921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:17.993156  153921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42233
	I0729 12:04:17.993671  153921 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:17.994319  153921 main.go:141] libmachine: Using API Version  1
	I0729 12:04:17.994341  153921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:17.994808  153921 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:17.995069  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:04:18.032159  153921 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:04:18.033503  153921 start.go:297] selected driver: kvm2
	I0729 12:04:18.033519  153921 start.go:901] validating driver "kvm2" against &{Name:multinode-293807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-293807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:04:18.033674  153921 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:04:18.034014  153921 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:04:18.034099  153921 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:04:18.050206  153921 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:04:18.050922  153921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:04:18.050969  153921 cni.go:84] Creating CNI manager for ""
	I0729 12:04:18.050977  153921 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 12:04:18.051044  153921 start.go:340] cluster config:
	{Name:multinode-293807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-293807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:04:18.051227  153921 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:04:18.053922  153921 out.go:177] * Starting "multinode-293807" primary control-plane node in "multinode-293807" cluster
	I0729 12:04:18.055197  153921 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:04:18.055242  153921 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:04:18.055256  153921 cache.go:56] Caching tarball of preloaded images
	I0729 12:04:18.055345  153921 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:04:18.055358  153921 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:04:18.055517  153921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/config.json ...
	I0729 12:04:18.055741  153921 start.go:360] acquireMachinesLock for multinode-293807: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:04:18.055808  153921 start.go:364] duration metric: took 44.837µs to acquireMachinesLock for "multinode-293807"
	I0729 12:04:18.055828  153921 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:04:18.055837  153921 fix.go:54] fixHost starting: 
	I0729 12:04:18.056104  153921 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:04:18.056144  153921 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:04:18.071967  153921 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0729 12:04:18.072406  153921 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:04:18.072924  153921 main.go:141] libmachine: Using API Version  1
	I0729 12:04:18.072945  153921 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:04:18.073318  153921 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:04:18.073537  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:04:18.073682  153921 main.go:141] libmachine: (multinode-293807) Calling .GetState
	I0729 12:04:18.075285  153921 fix.go:112] recreateIfNeeded on multinode-293807: state=Running err=<nil>
	W0729 12:04:18.075323  153921 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:04:18.078291  153921 out.go:177] * Updating the running kvm2 "multinode-293807" VM ...
	I0729 12:04:18.079843  153921 machine.go:94] provisionDockerMachine start ...
	I0729 12:04:18.079869  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:04:18.080121  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.082597  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.083019  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.083048  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.083236  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:04:18.083416  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.083564  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.083705  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:04:18.083870  153921 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:18.084106  153921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0729 12:04:18.084120  153921 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:04:18.197259  153921 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-293807
	
	I0729 12:04:18.197294  153921 main.go:141] libmachine: (multinode-293807) Calling .GetMachineName
	I0729 12:04:18.197558  153921 buildroot.go:166] provisioning hostname "multinode-293807"
	I0729 12:04:18.197584  153921 main.go:141] libmachine: (multinode-293807) Calling .GetMachineName
	I0729 12:04:18.197776  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.200421  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.200818  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.200846  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.200984  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:04:18.201183  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.201338  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.201455  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:04:18.201601  153921 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:18.201757  153921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0729 12:04:18.201769  153921 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-293807 && echo "multinode-293807" | sudo tee /etc/hostname
	I0729 12:04:18.327942  153921 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-293807
	
	I0729 12:04:18.327987  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.330830  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.331205  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.331243  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.331407  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:04:18.331620  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.331798  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.331928  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:04:18.332082  153921 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:18.332262  153921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0729 12:04:18.332280  153921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-293807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-293807/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-293807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:04:18.449962  153921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:04:18.450000  153921 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 12:04:18.450043  153921 buildroot.go:174] setting up certificates
	I0729 12:04:18.450056  153921 provision.go:84] configureAuth start
	I0729 12:04:18.450072  153921 main.go:141] libmachine: (multinode-293807) Calling .GetMachineName
	I0729 12:04:18.450362  153921 main.go:141] libmachine: (multinode-293807) Calling .GetIP
	I0729 12:04:18.452937  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.453320  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.453357  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.453535  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.455599  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.455938  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.455966  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.456100  153921 provision.go:143] copyHostCerts
	I0729 12:04:18.456134  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 12:04:18.456170  153921 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 12:04:18.456179  153921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 12:04:18.456244  153921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 12:04:18.456337  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 12:04:18.456355  153921 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 12:04:18.456359  153921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 12:04:18.456382  153921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 12:04:18.456435  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 12:04:18.456451  153921 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 12:04:18.456457  153921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 12:04:18.456476  153921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 12:04:18.456530  153921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.multinode-293807 san=[127.0.0.1 192.168.39.26 localhost minikube multinode-293807]
	I0729 12:04:18.709672  153921 provision.go:177] copyRemoteCerts
	I0729 12:04:18.709745  153921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:04:18.709771  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.712443  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.712908  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.712943  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.713156  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:04:18.713383  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.713584  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:04:18.713719  153921 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/multinode-293807/id_rsa Username:docker}
	I0729 12:04:18.808642  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 12:04:18.808727  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 12:04:18.834262  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 12:04:18.834348  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 12:04:18.858180  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 12:04:18.858255  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:04:18.883028  153921 provision.go:87] duration metric: took 432.954887ms to configureAuth
	I0729 12:04:18.883060  153921 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:04:18.883283  153921 config.go:182] Loaded profile config "multinode-293807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:04:18.883370  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:04:18.886200  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.886585  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:04:18.886607  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:04:18.886824  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:04:18.887038  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.887205  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:04:18.887320  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:04:18.887474  153921 main.go:141] libmachine: Using SSH client type: native
	I0729 12:04:18.887662  153921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0729 12:04:18.887683  153921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:05:49.765830  153921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:05:49.765873  153921 machine.go:97] duration metric: took 1m31.68601228s to provisionDockerMachine
	I0729 12:05:49.765887  153921 start.go:293] postStartSetup for "multinode-293807" (driver="kvm2")
	I0729 12:05:49.765899  153921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:05:49.765926  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:05:49.766248  153921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:05:49.766282  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:05:49.769552  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:49.769968  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:49.770010  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:49.770171  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:05:49.770398  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:05:49.770569  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:05:49.770677  153921 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/multinode-293807/id_rsa Username:docker}
	I0729 12:05:49.855885  153921 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:05:49.860097  153921 command_runner.go:130] > NAME=Buildroot
	I0729 12:05:49.860120  153921 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 12:05:49.860126  153921 command_runner.go:130] > ID=buildroot
	I0729 12:05:49.860133  153921 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 12:05:49.860140  153921 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 12:05:49.860191  153921 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:05:49.860209  153921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 12:05:49.860283  153921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 12:05:49.860352  153921 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 12:05:49.860362  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /etc/ssl/certs/1209632.pem
	I0729 12:05:49.860454  153921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:05:49.870135  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:05:49.894511  153921 start.go:296] duration metric: took 128.605695ms for postStartSetup
	I0729 12:05:49.894560  153921 fix.go:56] duration metric: took 1m31.838725321s for fixHost
	I0729 12:05:49.894582  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:05:49.897333  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:49.897761  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:49.897798  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:49.897934  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:05:49.898169  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:05:49.898310  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:05:49.898441  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:05:49.898597  153921 main.go:141] libmachine: Using SSH client type: native
	I0729 12:05:49.898833  153921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0729 12:05:49.898848  153921 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:05:50.013753  153921 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722254749.986012519
	
	I0729 12:05:50.013782  153921 fix.go:216] guest clock: 1722254749.986012519
	I0729 12:05:50.013801  153921 fix.go:229] Guest: 2024-07-29 12:05:49.986012519 +0000 UTC Remote: 2024-07-29 12:05:49.894564673 +0000 UTC m=+91.974195898 (delta=91.447846ms)
	I0729 12:05:50.013832  153921 fix.go:200] guest clock delta is within tolerance: 91.447846ms
	I0729 12:05:50.013841  153921 start.go:83] releasing machines lock for "multinode-293807", held for 1m31.958021336s
	I0729 12:05:50.013868  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:05:50.014199  153921 main.go:141] libmachine: (multinode-293807) Calling .GetIP
	I0729 12:05:50.016936  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.017307  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:50.017424  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.017518  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:05:50.018043  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:05:50.018216  153921 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:05:50.018287  153921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:05:50.018350  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:05:50.018454  153921 ssh_runner.go:195] Run: cat /version.json
	I0729 12:05:50.018481  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:05:50.021266  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.021294  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.021696  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:50.021724  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.021751  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:50.021763  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:50.021892  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:05:50.022001  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:05:50.022090  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:05:50.022142  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:05:50.022200  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:05:50.022241  153921 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:05:50.022300  153921 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/multinode-293807/id_rsa Username:docker}
	I0729 12:05:50.022348  153921 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/multinode-293807/id_rsa Username:docker}
	I0729 12:05:50.101398  153921 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 12:05:50.101693  153921 ssh_runner.go:195] Run: systemctl --version
	I0729 12:05:50.123455  153921 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 12:05:50.123521  153921 command_runner.go:130] > systemd 252 (252)
	I0729 12:05:50.123549  153921 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 12:05:50.123623  153921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:05:50.278878  153921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 12:05:50.284438  153921 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 12:05:50.284528  153921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:05:50.284592  153921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:05:50.294044  153921 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:05:50.294072  153921 start.go:495] detecting cgroup driver to use...
	I0729 12:05:50.294151  153921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:05:50.310921  153921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:05:50.325490  153921 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:05:50.325573  153921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:05:50.339841  153921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:05:50.353862  153921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:05:50.496896  153921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:05:50.645673  153921 docker.go:233] disabling docker service ...
	I0729 12:05:50.645743  153921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:05:50.662403  153921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:05:50.675964  153921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:05:50.820424  153921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:05:50.966315  153921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:05:50.980932  153921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:05:50.999447  153921 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 12:05:50.999709  153921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:05:50.999778  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.010609  153921 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:05:51.010680  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.021508  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.032512  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.043455  153921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:05:51.054872  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.065635  153921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.077007  153921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:05:51.087502  153921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:05:51.097004  153921 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 12:05:51.097103  153921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:05:51.107066  153921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:05:51.244296  153921 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:05:51.781015  153921 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:05:51.781107  153921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:05:51.785682  153921 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 12:05:51.785708  153921 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 12:05:51.785715  153921 command_runner.go:130] > Device: 0,22	Inode: 1317        Links: 1
	I0729 12:05:51.785721  153921 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 12:05:51.785726  153921 command_runner.go:130] > Access: 2024-07-29 12:05:51.648880032 +0000
	I0729 12:05:51.785732  153921 command_runner.go:130] > Modify: 2024-07-29 12:05:51.648880032 +0000
	I0729 12:05:51.785738  153921 command_runner.go:130] > Change: 2024-07-29 12:05:51.648880032 +0000
	I0729 12:05:51.785743  153921 command_runner.go:130] >  Birth: -
	I0729 12:05:51.785768  153921 start.go:563] Will wait 60s for crictl version
	I0729 12:05:51.785836  153921 ssh_runner.go:195] Run: which crictl
	I0729 12:05:51.789472  153921 command_runner.go:130] > /usr/bin/crictl
	I0729 12:05:51.789585  153921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:05:51.823263  153921 command_runner.go:130] > Version:  0.1.0
	I0729 12:05:51.823291  153921 command_runner.go:130] > RuntimeName:  cri-o
	I0729 12:05:51.823296  153921 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 12:05:51.823302  153921 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 12:05:51.824201  153921 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:05:51.824274  153921 ssh_runner.go:195] Run: crio --version
	I0729 12:05:51.851865  153921 command_runner.go:130] > crio version 1.29.1
	I0729 12:05:51.851888  153921 command_runner.go:130] > Version:        1.29.1
	I0729 12:05:51.851894  153921 command_runner.go:130] > GitCommit:      unknown
	I0729 12:05:51.851906  153921 command_runner.go:130] > GitCommitDate:  unknown
	I0729 12:05:51.851913  153921 command_runner.go:130] > GitTreeState:   clean
	I0729 12:05:51.851922  153921 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 12:05:51.851929  153921 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 12:05:51.851935  153921 command_runner.go:130] > Compiler:       gc
	I0729 12:05:51.851940  153921 command_runner.go:130] > Platform:       linux/amd64
	I0729 12:05:51.851945  153921 command_runner.go:130] > Linkmode:       dynamic
	I0729 12:05:51.851949  153921 command_runner.go:130] > BuildTags:      
	I0729 12:05:51.851957  153921 command_runner.go:130] >   containers_image_ostree_stub
	I0729 12:05:51.851961  153921 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 12:05:51.851965  153921 command_runner.go:130] >   btrfs_noversion
	I0729 12:05:51.851970  153921 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 12:05:51.851976  153921 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 12:05:51.851981  153921 command_runner.go:130] >   seccomp
	I0729 12:05:51.851989  153921 command_runner.go:130] > LDFlags:          unknown
	I0729 12:05:51.851998  153921 command_runner.go:130] > SeccompEnabled:   true
	I0729 12:05:51.852008  153921 command_runner.go:130] > AppArmorEnabled:  false
	I0729 12:05:51.853277  153921 ssh_runner.go:195] Run: crio --version
	I0729 12:05:51.881454  153921 command_runner.go:130] > crio version 1.29.1
	I0729 12:05:51.881482  153921 command_runner.go:130] > Version:        1.29.1
	I0729 12:05:51.881491  153921 command_runner.go:130] > GitCommit:      unknown
	I0729 12:05:51.881496  153921 command_runner.go:130] > GitCommitDate:  unknown
	I0729 12:05:51.881501  153921 command_runner.go:130] > GitTreeState:   clean
	I0729 12:05:51.881507  153921 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 12:05:51.881520  153921 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 12:05:51.881524  153921 command_runner.go:130] > Compiler:       gc
	I0729 12:05:51.881528  153921 command_runner.go:130] > Platform:       linux/amd64
	I0729 12:05:51.881532  153921 command_runner.go:130] > Linkmode:       dynamic
	I0729 12:05:51.881537  153921 command_runner.go:130] > BuildTags:      
	I0729 12:05:51.881542  153921 command_runner.go:130] >   containers_image_ostree_stub
	I0729 12:05:51.881547  153921 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 12:05:51.881554  153921 command_runner.go:130] >   btrfs_noversion
	I0729 12:05:51.881561  153921 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 12:05:51.881569  153921 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 12:05:51.881579  153921 command_runner.go:130] >   seccomp
	I0729 12:05:51.881586  153921 command_runner.go:130] > LDFlags:          unknown
	I0729 12:05:51.881592  153921 command_runner.go:130] > SeccompEnabled:   true
	I0729 12:05:51.881598  153921 command_runner.go:130] > AppArmorEnabled:  false
	I0729 12:05:51.883736  153921 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:05:51.885218  153921 main.go:141] libmachine: (multinode-293807) Calling .GetIP
	I0729 12:05:51.887889  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:51.888257  153921 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:05:51.888290  153921 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:05:51.888487  153921 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:05:51.892592  153921 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 12:05:51.892836  153921 kubeadm.go:883] updating cluster {Name:multinode-293807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-293807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:05:51.893024  153921 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:05:51.893076  153921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:05:51.932948  153921 command_runner.go:130] > {
	I0729 12:05:51.932997  153921 command_runner.go:130] >   "images": [
	I0729 12:05:51.933003  153921 command_runner.go:130] >     {
	I0729 12:05:51.933017  153921 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 12:05:51.933024  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933034  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 12:05:51.933040  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933047  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933060  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 12:05:51.933075  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 12:05:51.933089  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933096  153921 command_runner.go:130] >       "size": "87165492",
	I0729 12:05:51.933106  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.933113  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933125  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933133  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933139  153921 command_runner.go:130] >     },
	I0729 12:05:51.933146  153921 command_runner.go:130] >     {
	I0729 12:05:51.933155  153921 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 12:05:51.933161  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933171  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 12:05:51.933178  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933185  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933196  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 12:05:51.933210  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 12:05:51.933218  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933226  153921 command_runner.go:130] >       "size": "87174707",
	I0729 12:05:51.933235  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.933244  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933252  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933257  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933263  153921 command_runner.go:130] >     },
	I0729 12:05:51.933271  153921 command_runner.go:130] >     {
	I0729 12:05:51.933280  153921 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 12:05:51.933288  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933295  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 12:05:51.933302  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933308  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933321  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 12:05:51.933333  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 12:05:51.933341  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933347  153921 command_runner.go:130] >       "size": "1363676",
	I0729 12:05:51.933355  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.933361  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933366  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933374  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933384  153921 command_runner.go:130] >     },
	I0729 12:05:51.933391  153921 command_runner.go:130] >     {
	I0729 12:05:51.933400  153921 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 12:05:51.933409  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933417  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 12:05:51.933425  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933431  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933445  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 12:05:51.933463  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 12:05:51.933470  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933476  153921 command_runner.go:130] >       "size": "31470524",
	I0729 12:05:51.933484  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.933491  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933500  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933506  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933513  153921 command_runner.go:130] >     },
	I0729 12:05:51.933519  153921 command_runner.go:130] >     {
	I0729 12:05:51.933530  153921 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 12:05:51.933539  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933548  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 12:05:51.933555  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933562  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933573  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 12:05:51.933588  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 12:05:51.933597  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933603  153921 command_runner.go:130] >       "size": "61245718",
	I0729 12:05:51.933612  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.933619  153921 command_runner.go:130] >       "username": "nonroot",
	I0729 12:05:51.933628  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933633  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933642  153921 command_runner.go:130] >     },
	I0729 12:05:51.933647  153921 command_runner.go:130] >     {
	I0729 12:05:51.933661  153921 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 12:05:51.933667  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933686  153921 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 12:05:51.933694  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933702  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933714  153921 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 12:05:51.933727  153921 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 12:05:51.933736  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933746  153921 command_runner.go:130] >       "size": "150779692",
	I0729 12:05:51.933753  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.933761  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.933766  153921 command_runner.go:130] >       },
	I0729 12:05:51.933772  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933778  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933783  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933788  153921 command_runner.go:130] >     },
	I0729 12:05:51.933801  153921 command_runner.go:130] >     {
	I0729 12:05:51.933811  153921 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 12:05:51.933820  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933827  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 12:05:51.933836  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933842  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.933853  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 12:05:51.933866  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 12:05:51.933880  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933886  153921 command_runner.go:130] >       "size": "117609954",
	I0729 12:05:51.933896  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.933902  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.933910  153921 command_runner.go:130] >       },
	I0729 12:05:51.933916  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.933927  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.933935  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.933940  153921 command_runner.go:130] >     },
	I0729 12:05:51.933947  153921 command_runner.go:130] >     {
	I0729 12:05:51.933955  153921 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 12:05:51.933964  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.933972  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 12:05:51.933980  153921 command_runner.go:130] >       ],
	I0729 12:05:51.933986  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.934010  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 12:05:51.934025  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 12:05:51.934030  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934036  153921 command_runner.go:130] >       "size": "112198984",
	I0729 12:05:51.934045  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.934050  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.934059  153921 command_runner.go:130] >       },
	I0729 12:05:51.934065  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.934071  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.934076  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.934081  153921 command_runner.go:130] >     },
	I0729 12:05:51.934085  153921 command_runner.go:130] >     {
	I0729 12:05:51.934094  153921 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 12:05:51.934099  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.934107  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 12:05:51.934112  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934117  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.934128  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 12:05:51.934139  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 12:05:51.934150  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934155  153921 command_runner.go:130] >       "size": "85953945",
	I0729 12:05:51.934163  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.934169  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.934175  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.934182  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.934191  153921 command_runner.go:130] >     },
	I0729 12:05:51.934196  153921 command_runner.go:130] >     {
	I0729 12:05:51.934207  153921 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 12:05:51.934215  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.934226  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 12:05:51.934235  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934241  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.934254  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 12:05:51.934266  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 12:05:51.934274  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934281  153921 command_runner.go:130] >       "size": "63051080",
	I0729 12:05:51.934289  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.934299  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.934306  153921 command_runner.go:130] >       },
	I0729 12:05:51.934312  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.934321  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.934327  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.934335  153921 command_runner.go:130] >     },
	I0729 12:05:51.934339  153921 command_runner.go:130] >     {
	I0729 12:05:51.934351  153921 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 12:05:51.934357  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.934365  153921 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 12:05:51.934371  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934377  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.934389  153921 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 12:05:51.934402  153921 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 12:05:51.934411  153921 command_runner.go:130] >       ],
	I0729 12:05:51.934418  153921 command_runner.go:130] >       "size": "750414",
	I0729 12:05:51.934426  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.934436  153921 command_runner.go:130] >         "value": "65535"
	I0729 12:05:51.934441  153921 command_runner.go:130] >       },
	I0729 12:05:51.934450  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.934456  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.934465  153921 command_runner.go:130] >       "pinned": true
	I0729 12:05:51.934470  153921 command_runner.go:130] >     }
	I0729 12:05:51.934477  153921 command_runner.go:130] >   ]
	I0729 12:05:51.934482  153921 command_runner.go:130] > }
	I0729 12:05:51.934714  153921 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:05:51.934733  153921 crio.go:433] Images already preloaded, skipping extraction
	I0729 12:05:51.934788  153921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:05:51.964809  153921 command_runner.go:130] > {
	I0729 12:05:51.964835  153921 command_runner.go:130] >   "images": [
	I0729 12:05:51.964839  153921 command_runner.go:130] >     {
	I0729 12:05:51.964849  153921 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 12:05:51.964854  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.964860  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 12:05:51.964864  153921 command_runner.go:130] >       ],
	I0729 12:05:51.964868  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.964877  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 12:05:51.964883  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 12:05:51.964886  153921 command_runner.go:130] >       ],
	I0729 12:05:51.964891  153921 command_runner.go:130] >       "size": "87165492",
	I0729 12:05:51.964895  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.964900  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.964908  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.964913  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.964916  153921 command_runner.go:130] >     },
	I0729 12:05:51.964919  153921 command_runner.go:130] >     {
	I0729 12:05:51.964926  153921 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 12:05:51.964935  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.964939  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 12:05:51.964943  153921 command_runner.go:130] >       ],
	I0729 12:05:51.964947  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.964955  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 12:05:51.964971  153921 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 12:05:51.964977  153921 command_runner.go:130] >       ],
	I0729 12:05:51.964982  153921 command_runner.go:130] >       "size": "87174707",
	I0729 12:05:51.964992  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.964998  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965002  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965006  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965009  153921 command_runner.go:130] >     },
	I0729 12:05:51.965013  153921 command_runner.go:130] >     {
	I0729 12:05:51.965019  153921 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 12:05:51.965025  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965031  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 12:05:51.965033  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965038  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965046  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 12:05:51.965053  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 12:05:51.965058  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965062  153921 command_runner.go:130] >       "size": "1363676",
	I0729 12:05:51.965066  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.965070  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965075  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965081  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965087  153921 command_runner.go:130] >     },
	I0729 12:05:51.965091  153921 command_runner.go:130] >     {
	I0729 12:05:51.965097  153921 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 12:05:51.965103  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965108  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 12:05:51.965115  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965118  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965125  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 12:05:51.965138  153921 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 12:05:51.965144  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965148  153921 command_runner.go:130] >       "size": "31470524",
	I0729 12:05:51.965151  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.965155  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965160  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965166  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965169  153921 command_runner.go:130] >     },
	I0729 12:05:51.965175  153921 command_runner.go:130] >     {
	I0729 12:05:51.965181  153921 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 12:05:51.965187  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965192  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 12:05:51.965198  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965202  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965209  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 12:05:51.965218  153921 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 12:05:51.965222  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965228  153921 command_runner.go:130] >       "size": "61245718",
	I0729 12:05:51.965232  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.965236  153921 command_runner.go:130] >       "username": "nonroot",
	I0729 12:05:51.965240  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965243  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965247  153921 command_runner.go:130] >     },
	I0729 12:05:51.965250  153921 command_runner.go:130] >     {
	I0729 12:05:51.965256  153921 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 12:05:51.965262  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965266  153921 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 12:05:51.965272  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965275  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965283  153921 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 12:05:51.965292  153921 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 12:05:51.965297  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965303  153921 command_runner.go:130] >       "size": "150779692",
	I0729 12:05:51.965308  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.965312  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.965318  153921 command_runner.go:130] >       },
	I0729 12:05:51.965322  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965327  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965332  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965337  153921 command_runner.go:130] >     },
	I0729 12:05:51.965341  153921 command_runner.go:130] >     {
	I0729 12:05:51.965347  153921 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 12:05:51.965353  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965358  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 12:05:51.965363  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965368  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965378  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 12:05:51.965387  153921 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 12:05:51.965391  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965395  153921 command_runner.go:130] >       "size": "117609954",
	I0729 12:05:51.965401  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.965405  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.965409  153921 command_runner.go:130] >       },
	I0729 12:05:51.965413  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965417  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965423  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965426  153921 command_runner.go:130] >     },
	I0729 12:05:51.965432  153921 command_runner.go:130] >     {
	I0729 12:05:51.965437  153921 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 12:05:51.965441  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965448  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 12:05:51.965451  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965456  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965476  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 12:05:51.965486  153921 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 12:05:51.965489  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965493  153921 command_runner.go:130] >       "size": "112198984",
	I0729 12:05:51.965502  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.965506  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.965510  153921 command_runner.go:130] >       },
	I0729 12:05:51.965514  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965518  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965522  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965525  153921 command_runner.go:130] >     },
	I0729 12:05:51.965528  153921 command_runner.go:130] >     {
	I0729 12:05:51.965534  153921 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 12:05:51.965539  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965544  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 12:05:51.965549  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965553  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965560  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 12:05:51.965569  153921 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 12:05:51.965573  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965577  153921 command_runner.go:130] >       "size": "85953945",
	I0729 12:05:51.965581  153921 command_runner.go:130] >       "uid": null,
	I0729 12:05:51.965587  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965592  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965596  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965601  153921 command_runner.go:130] >     },
	I0729 12:05:51.965604  153921 command_runner.go:130] >     {
	I0729 12:05:51.965610  153921 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 12:05:51.965616  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965621  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 12:05:51.965626  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965630  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965637  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 12:05:51.965646  153921 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 12:05:51.965649  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965653  153921 command_runner.go:130] >       "size": "63051080",
	I0729 12:05:51.965657  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.965661  153921 command_runner.go:130] >         "value": "0"
	I0729 12:05:51.965664  153921 command_runner.go:130] >       },
	I0729 12:05:51.965671  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965675  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965680  153921 command_runner.go:130] >       "pinned": false
	I0729 12:05:51.965684  153921 command_runner.go:130] >     },
	I0729 12:05:51.965689  153921 command_runner.go:130] >     {
	I0729 12:05:51.965695  153921 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 12:05:51.965701  153921 command_runner.go:130] >       "repoTags": [
	I0729 12:05:51.965705  153921 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 12:05:51.965710  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965714  153921 command_runner.go:130] >       "repoDigests": [
	I0729 12:05:51.965722  153921 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 12:05:51.965731  153921 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 12:05:51.965735  153921 command_runner.go:130] >       ],
	I0729 12:05:51.965740  153921 command_runner.go:130] >       "size": "750414",
	I0729 12:05:51.965744  153921 command_runner.go:130] >       "uid": {
	I0729 12:05:51.965747  153921 command_runner.go:130] >         "value": "65535"
	I0729 12:05:51.965753  153921 command_runner.go:130] >       },
	I0729 12:05:51.965757  153921 command_runner.go:130] >       "username": "",
	I0729 12:05:51.965764  153921 command_runner.go:130] >       "spec": null,
	I0729 12:05:51.965768  153921 command_runner.go:130] >       "pinned": true
	I0729 12:05:51.965771  153921 command_runner.go:130] >     }
	I0729 12:05:51.965775  153921 command_runner.go:130] >   ]
	I0729 12:05:51.965778  153921 command_runner.go:130] > }
	I0729 12:05:51.966334  153921 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:05:51.966354  153921 cache_images.go:84] Images are preloaded, skipping loading
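The two listings above come from the same "sudo crictl images --output json" invocation run twice: once to decide whether the preloaded image tarball needs extracting (crio.go:433), and once more to confirm the images before the cache load step is skipped (cache_images.go:84). Only the "repoTags" entries matter for that check. A minimal, hypothetical Go sketch of the same idea (not minikube's actual implementation), assuming a crictl binary on PATH, could look like this:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictlImage mirrors only the field this check needs from the JSON above.
	type crictlImage struct {
		RepoTags []string `json:"repoTags"`
	}

	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		// Same command the log records: sudo crictl images --output json.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Expected tags taken from the listing above (Kubernetes v1.30.3 on CRI-O).
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/kube-controller-manager:v1.30.3",
			"registry.k8s.io/kube-scheduler:v1.30.3",
			"registry.k8s.io/kube-proxy:v1.30.3",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/pause:3.9",
		}
		for _, want := range expected {
			if !have[want] {
				fmt.Println("missing:", want)
				return
			}
		}
		fmt.Println("all images are preloaded")
	}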
	I0729 12:05:51.966364  153921 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.30.3 crio true true} ...
	I0729 12:05:51.966460  153921 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-293807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-293807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
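The block above is the kubelet systemd drop-in rendered for this node together with the cluster config it was derived from; the parameters that vary per node are the Kubernetes version (which selects the kubelet binary path), the hostname override, and the node IP. A hypothetical Go sketch of rendering such a drop-in from those three values (not minikube's actual template) could look like:

	package main

	import (
		"os"
		"text/template"
	)

	// nodeConfig holds just the fields the drop-in above actually uses.
	type nodeConfig struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	// kubeletUnit approximates the drop-in printed in the log above.
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		cfg := nodeConfig{
			KubernetesVersion: "v1.30.3",
			Hostname:          "multinode-293807",
			NodeIP:            "192.168.39.26",
		}
		// Render the unit to stdout; a real caller would write it to the node over SSH.
		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}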
	I0729 12:05:51.966529  153921 ssh_runner.go:195] Run: crio config
	I0729 12:05:52.008730  153921 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 12:05:52.008766  153921 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 12:05:52.008775  153921 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 12:05:52.008778  153921 command_runner.go:130] > #
	I0729 12:05:52.008797  153921 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 12:05:52.008806  153921 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 12:05:52.008815  153921 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 12:05:52.008828  153921 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 12:05:52.008834  153921 command_runner.go:130] > # reload'.
	I0729 12:05:52.008844  153921 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 12:05:52.008855  153921 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 12:05:52.008861  153921 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 12:05:52.008867  153921 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 12:05:52.008871  153921 command_runner.go:130] > [crio]
	I0729 12:05:52.008877  153921 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 12:05:52.008882  153921 command_runner.go:130] > # containers images, in this directory.
	I0729 12:05:52.008890  153921 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 12:05:52.008902  153921 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 12:05:52.008914  153921 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 12:05:52.008927  153921 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 12:05:52.008937  153921 command_runner.go:130] > # imagestore = ""
	I0729 12:05:52.008947  153921 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 12:05:52.008956  153921 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 12:05:52.008969  153921 command_runner.go:130] > storage_driver = "overlay"
	I0729 12:05:52.008975  153921 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 12:05:52.008982  153921 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 12:05:52.008985  153921 command_runner.go:130] > storage_option = [
	I0729 12:05:52.008990  153921 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 12:05:52.008995  153921 command_runner.go:130] > ]
	I0729 12:05:52.009001  153921 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 12:05:52.009007  153921 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 12:05:52.009015  153921 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 12:05:52.009021  153921 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 12:05:52.009039  153921 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 12:05:52.009046  153921 command_runner.go:130] > # always happen on a node reboot
	I0729 12:05:52.009056  153921 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 12:05:52.009076  153921 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 12:05:52.009088  153921 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 12:05:52.009103  153921 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 12:05:52.009114  153921 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 12:05:52.009129  153921 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 12:05:52.009142  153921 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 12:05:52.009148  153921 command_runner.go:130] > # internal_wipe = true
	I0729 12:05:52.009155  153921 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 12:05:52.009160  153921 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 12:05:52.009164  153921 command_runner.go:130] > # internal_repair = false
	I0729 12:05:52.009170  153921 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 12:05:52.009178  153921 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 12:05:52.009186  153921 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 12:05:52.009191  153921 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 12:05:52.009199  153921 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 12:05:52.009202  153921 command_runner.go:130] > [crio.api]
	I0729 12:05:52.009210  153921 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 12:05:52.009214  153921 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 12:05:52.009223  153921 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 12:05:52.009229  153921 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 12:05:52.009239  153921 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 12:05:52.009245  153921 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 12:05:52.009251  153921 command_runner.go:130] > # stream_port = "0"
	I0729 12:05:52.009259  153921 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 12:05:52.009267  153921 command_runner.go:130] > # stream_enable_tls = false
	I0729 12:05:52.009276  153921 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 12:05:52.009282  153921 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 12:05:52.009296  153921 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 12:05:52.009309  153921 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 12:05:52.009318  153921 command_runner.go:130] > # minutes.
	I0729 12:05:52.009325  153921 command_runner.go:130] > # stream_tls_cert = ""
	I0729 12:05:52.009339  153921 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 12:05:52.009352  153921 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 12:05:52.009360  153921 command_runner.go:130] > # stream_tls_key = ""
	I0729 12:05:52.009367  153921 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 12:05:52.009379  153921 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 12:05:52.009399  153921 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 12:05:52.009409  153921 command_runner.go:130] > # stream_tls_ca = ""
	I0729 12:05:52.009420  153921 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 12:05:52.009431  153921 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 12:05:52.009445  153921 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 12:05:52.009456  153921 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
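Both gRPC message-size limits are set to 16777216 bytes here, i.e. 16 × 1024 × 1024 = 16 MiB, which is lower than the 80 × 1024 × 1024 byte (80 MiB) default described in the comments; the same arithmetic applies to the send and receive directions.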
	I0729 12:05:52.009469  153921 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 12:05:52.009480  153921 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 12:05:52.009490  153921 command_runner.go:130] > [crio.runtime]
	I0729 12:05:52.009500  153921 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 12:05:52.009512  153921 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 12:05:52.009521  153921 command_runner.go:130] > # "nofile=1024:2048"
	I0729 12:05:52.009530  153921 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 12:05:52.009540  153921 command_runner.go:130] > # default_ulimits = [
	I0729 12:05:52.009545  153921 command_runner.go:130] > # ]
	I0729 12:05:52.009558  153921 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 12:05:52.009570  153921 command_runner.go:130] > # no_pivot = false
	I0729 12:05:52.009582  153921 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 12:05:52.009593  153921 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 12:05:52.009604  153921 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 12:05:52.009617  153921 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 12:05:52.009627  153921 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 12:05:52.009647  153921 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 12:05:52.009657  153921 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 12:05:52.009664  153921 command_runner.go:130] > # Cgroup setting for conmon
	I0729 12:05:52.009677  153921 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 12:05:52.009687  153921 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 12:05:52.009697  153921 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 12:05:52.009708  153921 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 12:05:52.009720  153921 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 12:05:52.009728  153921 command_runner.go:130] > conmon_env = [
	I0729 12:05:52.009738  153921 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 12:05:52.009747  153921 command_runner.go:130] > ]
	I0729 12:05:52.009755  153921 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 12:05:52.009767  153921 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 12:05:52.009785  153921 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 12:05:52.009795  153921 command_runner.go:130] > # default_env = [
	I0729 12:05:52.009801  153921 command_runner.go:130] > # ]
	I0729 12:05:52.009813  153921 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 12:05:52.009828  153921 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0729 12:05:52.009837  153921 command_runner.go:130] > # selinux = false
	I0729 12:05:52.009847  153921 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 12:05:52.009857  153921 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 12:05:52.009864  153921 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 12:05:52.009868  153921 command_runner.go:130] > # seccomp_profile = ""
	I0729 12:05:52.009876  153921 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 12:05:52.009881  153921 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 12:05:52.009889  153921 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 12:05:52.009894  153921 command_runner.go:130] > # which might increase security.
	I0729 12:05:52.009898  153921 command_runner.go:130] > # This option is currently deprecated,
	I0729 12:05:52.009909  153921 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 12:05:52.009919  153921 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 12:05:52.009930  153921 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 12:05:52.009944  153921 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 12:05:52.009956  153921 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 12:05:52.009969  153921 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 12:05:52.009979  153921 command_runner.go:130] > # This option supports live configuration reload.
	I0729 12:05:52.009990  153921 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 12:05:52.010002  153921 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 12:05:52.010013  153921 command_runner.go:130] > # the cgroup blockio controller.
	I0729 12:05:52.010023  153921 command_runner.go:130] > # blockio_config_file = ""
	I0729 12:05:52.010033  153921 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 12:05:52.010042  153921 command_runner.go:130] > # blockio parameters.
	I0729 12:05:52.010048  153921 command_runner.go:130] > # blockio_reload = false
	I0729 12:05:52.010058  153921 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 12:05:52.010062  153921 command_runner.go:130] > # irqbalance daemon.
	I0729 12:05:52.010069  153921 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 12:05:52.010077  153921 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 12:05:52.010087  153921 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 12:05:52.010100  153921 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 12:05:52.010112  153921 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 12:05:52.010124  153921 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 12:05:52.010135  153921 command_runner.go:130] > # This option supports live configuration reload.
	I0729 12:05:52.010144  153921 command_runner.go:130] > # rdt_config_file = ""
	I0729 12:05:52.010156  153921 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 12:05:52.010166  153921 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 12:05:52.010196  153921 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 12:05:52.010207  153921 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 12:05:52.010216  153921 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 12:05:52.010230  153921 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 12:05:52.010239  153921 command_runner.go:130] > # will be added.
	I0729 12:05:52.010246  153921 command_runner.go:130] > # default_capabilities = [
	I0729 12:05:52.010254  153921 command_runner.go:130] > # 	"CHOWN",
	I0729 12:05:52.010261  153921 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 12:05:52.010270  153921 command_runner.go:130] > # 	"FSETID",
	I0729 12:05:52.010278  153921 command_runner.go:130] > # 	"FOWNER",
	I0729 12:05:52.010287  153921 command_runner.go:130] > # 	"SETGID",
	I0729 12:05:52.010300  153921 command_runner.go:130] > # 	"SETUID",
	I0729 12:05:52.010306  153921 command_runner.go:130] > # 	"SETPCAP",
	I0729 12:05:52.010315  153921 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 12:05:52.010322  153921 command_runner.go:130] > # 	"KILL",
	I0729 12:05:52.010329  153921 command_runner.go:130] > # ]
	I0729 12:05:52.010340  153921 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 12:05:52.010355  153921 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 12:05:52.010368  153921 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 12:05:52.010381  153921 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 12:05:52.010393  153921 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 12:05:52.010403  153921 command_runner.go:130] > default_sysctls = [
	I0729 12:05:52.010410  153921 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 12:05:52.010419  153921 command_runner.go:130] > ]
	I0729 12:05:52.010427  153921 command_runner.go:130] > # List of devices on the host that a
	I0729 12:05:52.010439  153921 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 12:05:52.010448  153921 command_runner.go:130] > # allowed_devices = [
	I0729 12:05:52.010456  153921 command_runner.go:130] > # 	"/dev/fuse",
	I0729 12:05:52.010465  153921 command_runner.go:130] > # ]
	I0729 12:05:52.010473  153921 command_runner.go:130] > # List of additional devices. specified as
	I0729 12:05:52.010487  153921 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 12:05:52.010498  153921 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 12:05:52.010507  153921 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 12:05:52.010518  153921 command_runner.go:130] > # additional_devices = [
	I0729 12:05:52.010524  153921 command_runner.go:130] > # ]
	I0729 12:05:52.010535  153921 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 12:05:52.010543  153921 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 12:05:52.010550  153921 command_runner.go:130] > # 	"/etc/cdi",
	I0729 12:05:52.010559  153921 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 12:05:52.010568  153921 command_runner.go:130] > # ]
	I0729 12:05:52.010577  153921 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 12:05:52.010589  153921 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 12:05:52.010599  153921 command_runner.go:130] > # Defaults to false.
	I0729 12:05:52.010608  153921 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 12:05:52.010621  153921 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 12:05:52.010634  153921 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 12:05:52.010643  153921 command_runner.go:130] > # hooks_dir = [
	I0729 12:05:52.010651  153921 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 12:05:52.010659  153921 command_runner.go:130] > # ]
	I0729 12:05:52.010669  153921 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 12:05:52.010682  153921 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 12:05:52.010693  153921 command_runner.go:130] > # its default mounts from the following two files:
	I0729 12:05:52.010701  153921 command_runner.go:130] > #
	I0729 12:05:52.010710  153921 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 12:05:52.010725  153921 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 12:05:52.010736  153921 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 12:05:52.010743  153921 command_runner.go:130] > #
	I0729 12:05:52.010753  153921 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 12:05:52.010767  153921 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 12:05:52.010780  153921 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 12:05:52.010794  153921 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 12:05:52.010799  153921 command_runner.go:130] > #
	I0729 12:05:52.010810  153921 command_runner.go:130] > # default_mounts_file = ""
	I0729 12:05:52.010820  153921 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 12:05:52.010833  153921 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 12:05:52.010843  153921 command_runner.go:130] > pids_limit = 1024
	I0729 12:05:52.010853  153921 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0729 12:05:52.010865  153921 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 12:05:52.010877  153921 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 12:05:52.010887  153921 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 12:05:52.010891  153921 command_runner.go:130] > # log_size_max = -1
	I0729 12:05:52.010897  153921 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 12:05:52.010904  153921 command_runner.go:130] > # log_to_journald = false
	I0729 12:05:52.010909  153921 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 12:05:52.010916  153921 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 12:05:52.010921  153921 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 12:05:52.010932  153921 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 12:05:52.010941  153921 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 12:05:52.010951  153921 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 12:05:52.010961  153921 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 12:05:52.010971  153921 command_runner.go:130] > # read_only = false
	I0729 12:05:52.010980  153921 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 12:05:52.010994  153921 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 12:05:52.011004  153921 command_runner.go:130] > # live configuration reload.
	I0729 12:05:52.011011  153921 command_runner.go:130] > # log_level = "info"
	I0729 12:05:52.011022  153921 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 12:05:52.011033  153921 command_runner.go:130] > # This option supports live configuration reload.
	I0729 12:05:52.011040  153921 command_runner.go:130] > # log_filter = ""
	I0729 12:05:52.011053  153921 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 12:05:52.011065  153921 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 12:05:52.011072  153921 command_runner.go:130] > # separated by comma.
	I0729 12:05:52.011088  153921 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 12:05:52.011098  153921 command_runner.go:130] > # uid_mappings = ""
	I0729 12:05:52.011111  153921 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 12:05:52.011125  153921 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 12:05:52.011134  153921 command_runner.go:130] > # separated by comma.
	I0729 12:05:52.011146  153921 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 12:05:52.011156  153921 command_runner.go:130] > # gid_mappings = ""
	I0729 12:05:52.011166  153921 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 12:05:52.011180  153921 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 12:05:52.011194  153921 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 12:05:52.011205  153921 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 12:05:52.011224  153921 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 12:05:52.011238  153921 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 12:05:52.011251  153921 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 12:05:52.011262  153921 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 12:05:52.011277  153921 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 12:05:52.011286  153921 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 12:05:52.011297  153921 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 12:05:52.011310  153921 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 12:05:52.011322  153921 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 12:05:52.011332  153921 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 12:05:52.011341  153921 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 12:05:52.011354  153921 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 12:05:52.011365  153921 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 12:05:52.011376  153921 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 12:05:52.011384  153921 command_runner.go:130] > drop_infra_ctr = false
	I0729 12:05:52.011395  153921 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 12:05:52.011407  153921 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 12:05:52.011420  153921 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 12:05:52.011429  153921 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 12:05:52.011441  153921 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 12:05:52.011454  153921 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 12:05:52.011464  153921 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 12:05:52.011475  153921 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 12:05:52.011482  153921 command_runner.go:130] > # shared_cpuset = ""
	I0729 12:05:52.011493  153921 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 12:05:52.011503  153921 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 12:05:52.011508  153921 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 12:05:52.011518  153921 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 12:05:52.011522  153921 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 12:05:52.011530  153921 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 12:05:52.011538  153921 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 12:05:52.011543  153921 command_runner.go:130] > # enable_criu_support = false
	I0729 12:05:52.011550  153921 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 12:05:52.011557  153921 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 12:05:52.011564  153921 command_runner.go:130] > # enable_pod_events = false
	I0729 12:05:52.011570  153921 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 12:05:52.011578  153921 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 12:05:52.011585  153921 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 12:05:52.011591  153921 command_runner.go:130] > # default_runtime = "runc"
	I0729 12:05:52.011598  153921 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 12:05:52.011605  153921 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 12:05:52.011616  153921 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 12:05:52.011623  153921 command_runner.go:130] > # creation as a file is not desired either.
	I0729 12:05:52.011631  153921 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 12:05:52.011638  153921 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 12:05:52.011643  153921 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 12:05:52.011647  153921 command_runner.go:130] > # ]
	I0729 12:05:52.011653  153921 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 12:05:52.011662  153921 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 12:05:52.011667  153921 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 12:05:52.011674  153921 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 12:05:52.011678  153921 command_runner.go:130] > #
	I0729 12:05:52.011683  153921 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 12:05:52.011690  153921 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 12:05:52.011709  153921 command_runner.go:130] > # runtime_type = "oci"
	I0729 12:05:52.011716  153921 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 12:05:52.011721  153921 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 12:05:52.011726  153921 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 12:05:52.011730  153921 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 12:05:52.011736  153921 command_runner.go:130] > # monitor_env = []
	I0729 12:05:52.011740  153921 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 12:05:52.011745  153921 command_runner.go:130] > # allowed_annotations = []
	I0729 12:05:52.011751  153921 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 12:05:52.011756  153921 command_runner.go:130] > # Where:
	I0729 12:05:52.011762  153921 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 12:05:52.011770  153921 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 12:05:52.011776  153921 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 12:05:52.011788  153921 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 12:05:52.011792  153921 command_runner.go:130] > #   in $PATH.
	I0729 12:05:52.011799  153921 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 12:05:52.011806  153921 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 12:05:52.011812  153921 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 12:05:52.011818  153921 command_runner.go:130] > #   state.
	I0729 12:05:52.011825  153921 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 12:05:52.011833  153921 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 12:05:52.011839  153921 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 12:05:52.011846  153921 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 12:05:52.011852  153921 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 12:05:52.011860  153921 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 12:05:52.011866  153921 command_runner.go:130] > #   The currently recognized values are:
	I0729 12:05:52.011872  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 12:05:52.011881  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 12:05:52.011889  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 12:05:52.011897  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 12:05:52.011904  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 12:05:52.011912  153921 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 12:05:52.011918  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 12:05:52.011926  153921 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 12:05:52.011932  153921 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 12:05:52.011940  153921 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 12:05:52.011944  153921 command_runner.go:130] > #   deprecated option "conmon".
	I0729 12:05:52.011951  153921 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 12:05:52.011956  153921 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 12:05:52.011964  153921 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 12:05:52.011969  153921 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 12:05:52.011978  153921 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 12:05:52.011983  153921 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 12:05:52.011988  153921 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 12:05:52.011995  153921 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 12:05:52.011998  153921 command_runner.go:130] > #
	I0729 12:05:52.012003  153921 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 12:05:52.012006  153921 command_runner.go:130] > #
	I0729 12:05:52.012012  153921 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 12:05:52.012018  153921 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 12:05:52.012022  153921 command_runner.go:130] > #
	I0729 12:05:52.012028  153921 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 12:05:52.012036  153921 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 12:05:52.012039  153921 command_runner.go:130] > #
	I0729 12:05:52.012046  153921 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 12:05:52.012050  153921 command_runner.go:130] > # feature.
	I0729 12:05:52.012056  153921 command_runner.go:130] > #
	I0729 12:05:52.012061  153921 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 12:05:52.012069  153921 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 12:05:52.012076  153921 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 12:05:52.012084  153921 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 12:05:52.012090  153921 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 12:05:52.012095  153921 command_runner.go:130] > #
	I0729 12:05:52.012101  153921 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 12:05:52.012108  153921 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 12:05:52.012111  153921 command_runner.go:130] > #
	I0729 12:05:52.012117  153921 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 12:05:52.012124  153921 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 12:05:52.012127  153921 command_runner.go:130] > #
	I0729 12:05:52.012135  153921 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 12:05:52.012141  153921 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 12:05:52.012146  153921 command_runner.go:130] > # limitation.
	I0729 12:05:52.012151  153921 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 12:05:52.012158  153921 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 12:05:52.012161  153921 command_runner.go:130] > runtime_type = "oci"
	I0729 12:05:52.012166  153921 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 12:05:52.012172  153921 command_runner.go:130] > runtime_config_path = ""
	I0729 12:05:52.012177  153921 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 12:05:52.012182  153921 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 12:05:52.012185  153921 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 12:05:52.012191  153921 command_runner.go:130] > monitor_env = [
	I0729 12:05:52.012197  153921 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 12:05:52.012202  153921 command_runner.go:130] > ]
	I0729 12:05:52.012206  153921 command_runner.go:130] > privileged_without_host_devices = false
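As a sketch of the runtime-handler format documented above, a hypothetical second handler could sit alongside runc. The handler name "crun" and its paths are assumptions for illustration, not values from this run; the allowed_annotations entry is the one the seccomp notifier section above calls for:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"             # assumed install path
	runtime_type = "oci"
	runtime_root = "/run/crun"                 # assumed state directory
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	# Let pods using this handler opt in to the seccomp notifier feature described above.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod would then select this handler through a RuntimeClass whose handler field matches the table name.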
	I0729 12:05:52.012213  153921 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 12:05:52.012221  153921 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 12:05:52.012227  153921 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 12:05:52.012237  153921 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0729 12:05:52.012246  153921 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 12:05:52.012252  153921 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 12:05:52.012261  153921 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 12:05:52.012268  153921 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 12:05:52.012275  153921 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 12:05:52.012282  153921 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 12:05:52.012287  153921 command_runner.go:130] > # Example:
	I0729 12:05:52.012292  153921 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 12:05:52.012297  153921 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 12:05:52.012301  153921 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 12:05:52.012308  153921 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 12:05:52.012311  153921 command_runner.go:130] > # cpuset = "0-1"
	I0729 12:05:52.012315  153921 command_runner.go:130] > # cpushares = 0
	I0729 12:05:52.012318  153921 command_runner.go:130] > # Where:
	I0729 12:05:52.012323  153921 command_runner.go:130] > # The workload name is workload-type.
	I0729 12:05:52.012329  153921 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 12:05:52.012334  153921 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 12:05:52.012339  153921 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 12:05:52.012346  153921 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 12:05:52.012351  153921 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
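Putting the workload description above in one place, a hedged sketch; the cpushares number is a placeholder, and exact value types may differ between CRI-O versions:

	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpuset = "0-1"
	cpushares = 1024
	# A pod opts in with the annotation key "io.crio/workload" (value ignored); a single
	# container can be overridden via "io.crio.workload-type.cpushares/<container-name>".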
	I0729 12:05:52.012356  153921 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 12:05:52.012362  153921 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 12:05:52.012366  153921 command_runner.go:130] > # Default value is set to true
	I0729 12:05:52.012371  153921 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 12:05:52.012377  153921 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 12:05:52.012382  153921 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 12:05:52.012386  153921 command_runner.go:130] > # Default value is set to 'false'
	I0729 12:05:52.012389  153921 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 12:05:52.012395  153921 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 12:05:52.012398  153921 command_runner.go:130] > #
	I0729 12:05:52.012403  153921 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 12:05:52.012408  153921 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 12:05:52.012414  153921 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 12:05:52.012420  153921 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 12:05:52.012425  153921 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 12:05:52.012429  153921 command_runner.go:130] > [crio.image]
	I0729 12:05:52.012435  153921 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 12:05:52.012439  153921 command_runner.go:130] > # default_transport = "docker://"
	I0729 12:05:52.012444  153921 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 12:05:52.012450  153921 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 12:05:52.012454  153921 command_runner.go:130] > # global_auth_file = ""
	I0729 12:05:52.012459  153921 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 12:05:52.012463  153921 command_runner.go:130] > # This option supports live configuration reload.
	I0729 12:05:52.012468  153921 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 12:05:52.012473  153921 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 12:05:52.012481  153921 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 12:05:52.012485  153921 command_runner.go:130] > # This option supports live configuration reload.
	I0729 12:05:52.012489  153921 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 12:05:52.012495  153921 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 12:05:52.012500  153921 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0729 12:05:52.012506  153921 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0729 12:05:52.012511  153921 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 12:05:52.012514  153921 command_runner.go:130] > # pause_command = "/pause"
	I0729 12:05:52.012520  153921 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 12:05:52.012526  153921 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 12:05:52.012532  153921 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 12:05:52.012539  153921 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 12:05:52.012547  153921 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 12:05:52.012553  153921 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 12:05:52.012560  153921 command_runner.go:130] > # pinned_images = [
	I0729 12:05:52.012563  153921 command_runner.go:130] > # ]
	I0729 12:05:52.012569  153921 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 12:05:52.012577  153921 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 12:05:52.012584  153921 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 12:05:52.012592  153921 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 12:05:52.012598  153921 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 12:05:52.012602  153921 command_runner.go:130] > # signature_policy = ""
	I0729 12:05:52.012607  153921 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 12:05:52.012616  153921 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 12:05:52.012622  153921 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 12:05:52.012630  153921 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 12:05:52.012637  153921 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 12:05:52.012644  153921 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 12:05:52.012650  153921 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 12:05:52.012657  153921 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 12:05:52.012661  153921 command_runner.go:130] > # changing them here.
	I0729 12:05:52.012667  153921 command_runner.go:130] > # insecure_registries = [
	I0729 12:05:52.012671  153921 command_runner.go:130] > # ]
	I0729 12:05:52.012677  153921 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 12:05:52.012683  153921 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 12:05:52.012688  153921 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 12:05:52.012695  153921 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 12:05:52.012699  153921 command_runner.go:130] > # big_files_temporary_dir = ""
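A hedged sketch of how the image-related options above could be overridden together; the pause image tag matches the default echoed above, while the local registry host is a placeholder, not something from this run:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	# Keep the pause image out of kubelet garbage collection, as described above.
	pinned_images = [
		"registry.k8s.io/pause:3.9",
	]
	# Prefer /etc/containers/registries.conf for registry configuration; this is the
	# CRI-O-only escape hatch for a non-TLS registry.
	insecure_registries = [
		"registry.local:5000",
	]
	image_volumes = "mkdir"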
	I0729 12:05:52.012707  153921 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 12:05:52.012711  153921 command_runner.go:130] > # CNI plugins.
	I0729 12:05:52.012714  153921 command_runner.go:130] > [crio.network]
	I0729 12:05:52.012720  153921 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 12:05:52.012727  153921 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0729 12:05:52.012731  153921 command_runner.go:130] > # cni_default_network = ""
	I0729 12:05:52.012739  153921 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 12:05:52.012743  153921 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 12:05:52.012751  153921 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 12:05:52.012755  153921 command_runner.go:130] > # plugin_dirs = [
	I0729 12:05:52.012761  153921 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 12:05:52.012763  153921 command_runner.go:130] > # ]
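The CNI options above can also be pinned explicitly. A minimal sketch, where the network name is assumed (the log below recommends kindnet for this multinode profile) and the directories are the defaults just listed:

	[crio.network]
	# Select a specific CNI config instead of the first file found in network_dir.
	cni_default_network = "kindnet"    # assumed name, for illustration only
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]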
	I0729 12:05:52.012769  153921 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 12:05:52.012773  153921 command_runner.go:130] > [crio.metrics]
	I0729 12:05:52.012779  153921 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 12:05:52.012787  153921 command_runner.go:130] > enable_metrics = true
	I0729 12:05:52.012791  153921 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 12:05:52.012798  153921 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 12:05:52.012806  153921 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0729 12:05:52.012811  153921 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 12:05:52.012817  153921 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 12:05:52.012823  153921 command_runner.go:130] > # metrics_collectors = [
	I0729 12:05:52.012827  153921 command_runner.go:130] > # 	"operations",
	I0729 12:05:52.012832  153921 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 12:05:52.012839  153921 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 12:05:52.012844  153921 command_runner.go:130] > # 	"operations_errors",
	I0729 12:05:52.012848  153921 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 12:05:52.012853  153921 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 12:05:52.012857  153921 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 12:05:52.012862  153921 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 12:05:52.012869  153921 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 12:05:52.012873  153921 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 12:05:52.012880  153921 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 12:05:52.012885  153921 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 12:05:52.012891  153921 command_runner.go:130] > # 	"containers_oom_total",
	I0729 12:05:52.012895  153921 command_runner.go:130] > # 	"containers_oom",
	I0729 12:05:52.012901  153921 command_runner.go:130] > # 	"processes_defunct",
	I0729 12:05:52.012905  153921 command_runner.go:130] > # 	"operations_total",
	I0729 12:05:52.012909  153921 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 12:05:52.012914  153921 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 12:05:52.012920  153921 command_runner.go:130] > # 	"operations_errors_total",
	I0729 12:05:52.012924  153921 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 12:05:52.012928  153921 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 12:05:52.012932  153921 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 12:05:52.012939  153921 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 12:05:52.012943  153921 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 12:05:52.012950  153921 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 12:05:52.012955  153921 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 12:05:52.012979  153921 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 12:05:52.012988  153921 command_runner.go:130] > # ]
	I0729 12:05:52.012996  153921 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 12:05:52.013005  153921 command_runner.go:130] > # metrics_port = 9090
	I0729 12:05:52.013011  153921 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 12:05:52.013018  153921 command_runner.go:130] > # metrics_socket = ""
	I0729 12:05:52.013023  153921 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 12:05:52.013031  153921 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 12:05:52.013038  153921 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 12:05:52.013045  153921 command_runner.go:130] > # certificate on any modification event.
	I0729 12:05:52.013049  153921 command_runner.go:130] > # metrics_cert = ""
	I0729 12:05:52.013056  153921 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 12:05:52.013062  153921 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 12:05:52.013068  153921 command_runner.go:130] > # metrics_key = ""
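Metrics are already enabled for this run (enable_metrics = true above). A hedged sketch that keeps the default port and narrows the collectors to a few of the names listed above:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	# Bare or prefixed collector names are both accepted, per the comments above.
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]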
	I0729 12:05:52.013073  153921 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 12:05:52.013079  153921 command_runner.go:130] > [crio.tracing]
	I0729 12:05:52.013084  153921 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 12:05:52.013090  153921 command_runner.go:130] > # enable_tracing = false
	I0729 12:05:52.013095  153921 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0729 12:05:52.013102  153921 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 12:05:52.013108  153921 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 12:05:52.013115  153921 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 12:05:52.013120  153921 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 12:05:52.013125  153921 command_runner.go:130] > [crio.nri]
	I0729 12:05:52.013130  153921 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 12:05:52.013136  153921 command_runner.go:130] > # enable_nri = false
	I0729 12:05:52.013140  153921 command_runner.go:130] > # NRI socket to listen on.
	I0729 12:05:52.013264  153921 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 12:05:52.013285  153921 command_runner.go:130] > # NRI plugin directory to use.
	I0729 12:05:52.013292  153921 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 12:05:52.013298  153921 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 12:05:52.013312  153921 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 12:05:52.013324  153921 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 12:05:52.013333  153921 command_runner.go:130] > # nri_disable_connections = false
	I0729 12:05:52.013341  153921 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 12:05:52.013417  153921 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 12:05:52.013432  153921 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 12:05:52.013440  153921 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 12:05:52.013449  153921 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 12:05:52.013457  153921 command_runner.go:130] > [crio.stats]
	I0729 12:05:52.013463  153921 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 12:05:52.013471  153921 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 12:05:52.013476  153921 command_runner.go:130] > # stats_collection_period = 0
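The tracing, NRI and stats sections above all follow the same enable-plus-endpoint shape. A combined hedged sketch using only the defaults echoed above:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 1000000    # always sample, per the comment above

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"

	[crio.stats]
	stats_collection_period = 0    # 0 = collect stats on demand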
	I0729 12:05:52.013500  153921 command_runner.go:130] ! time="2024-07-29 12:05:51.972545700Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 12:05:52.013534  153921 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 12:05:52.013666  153921 cni.go:84] Creating CNI manager for ""
	I0729 12:05:52.013678  153921 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 12:05:52.013688  153921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:05:52.013736  153921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-293807 NodeName:multinode-293807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:05:52.013889  153921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-293807"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:05:52.013952  153921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:05:52.023560  153921 command_runner.go:130] > kubeadm
	I0729 12:05:52.023585  153921 command_runner.go:130] > kubectl
	I0729 12:05:52.023592  153921 command_runner.go:130] > kubelet
	I0729 12:05:52.023636  153921 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:05:52.023697  153921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 12:05:52.033341  153921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0729 12:05:52.050554  153921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:05:52.067704  153921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0729 12:05:52.084624  153921 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0729 12:05:52.088472  153921 command_runner.go:130] > 192.168.39.26	control-plane.minikube.internal
	I0729 12:05:52.088556  153921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:05:52.231649  153921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:05:52.246565  153921 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807 for IP: 192.168.39.26
	I0729 12:05:52.246593  153921 certs.go:194] generating shared ca certs ...
	I0729 12:05:52.246613  153921 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:05:52.246802  153921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 12:05:52.246857  153921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 12:05:52.246871  153921 certs.go:256] generating profile certs ...
	I0729 12:05:52.246968  153921 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/client.key
	I0729 12:05:52.247047  153921 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/apiserver.key.e2d5216b
	I0729 12:05:52.247097  153921 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/proxy-client.key
	I0729 12:05:52.247111  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 12:05:52.247131  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 12:05:52.247148  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 12:05:52.247165  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 12:05:52.247182  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 12:05:52.247201  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 12:05:52.247220  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 12:05:52.247236  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 12:05:52.247302  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 12:05:52.247345  153921 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 12:05:52.247357  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 12:05:52.247396  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 12:05:52.247429  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:05:52.247459  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 12:05:52.247514  153921 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:05:52.247560  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:05:52.247580  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem -> /usr/share/ca-certificates/120963.pem
	I0729 12:05:52.247598  153921 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> /usr/share/ca-certificates/1209632.pem
	I0729 12:05:52.248297  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:05:52.272716  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 12:05:52.296570  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:05:52.321612  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:05:52.346351  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 12:05:52.371499  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 12:05:52.396122  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:05:52.420978  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/multinode-293807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:05:52.446331  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:05:52.470989  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 12:05:52.494836  153921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 12:05:52.518812  153921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:05:52.535666  153921 ssh_runner.go:195] Run: openssl version
	I0729 12:05:52.541263  153921 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 12:05:52.541501  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 12:05:52.552515  153921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 12:05:52.556987  153921 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 12:05:52.557106  153921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 12:05:52.557168  153921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 12:05:52.562746  153921 command_runner.go:130] > 51391683
	I0729 12:05:52.562832  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
	I0729 12:05:52.575448  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 12:05:52.600345  153921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 12:05:52.604749  153921 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 12:05:52.604879  153921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 12:05:52.604928  153921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 12:05:52.610627  153921 command_runner.go:130] > 3ec20f2e
	I0729 12:05:52.610704  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:05:52.620369  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:05:52.631361  153921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:05:52.635818  153921 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:05:52.635852  153921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:05:52.635905  153921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:05:52.641373  153921 command_runner.go:130] > b5213941
	I0729 12:05:52.641550  153921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:05:52.651087  153921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:05:52.655603  153921 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:05:52.655632  153921 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 12:05:52.655641  153921 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0729 12:05:52.655650  153921 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 12:05:52.655658  153921 command_runner.go:130] > Access: 2024-07-29 11:59:09.216116967 +0000
	I0729 12:05:52.655665  153921 command_runner.go:130] > Modify: 2024-07-29 11:59:09.216116967 +0000
	I0729 12:05:52.655672  153921 command_runner.go:130] > Change: 2024-07-29 11:59:09.216116967 +0000
	I0729 12:05:52.655677  153921 command_runner.go:130] >  Birth: 2024-07-29 11:59:09.216116967 +0000
	I0729 12:05:52.655755  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:05:52.661401  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.661493  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:05:52.667374  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.667463  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:05:52.673324  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.673520  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:05:52.679202  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.679293  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:05:52.684920  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.685019  153921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 12:05:52.690564  153921 command_runner.go:130] > Certificate will not expire
	I0729 12:05:52.690645  153921 kubeadm.go:392] StartCluster: {Name:multinode-293807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-293807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.54 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.164 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:05:52.690824  153921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:05:52.690903  153921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:05:52.725919  153921 command_runner.go:130] > c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf
	I0729 12:05:52.725953  153921 command_runner.go:130] > 8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728
	I0729 12:05:52.725963  153921 command_runner.go:130] > 3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97
	I0729 12:05:52.725975  153921 command_runner.go:130] > 8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7
	I0729 12:05:52.725983  153921 command_runner.go:130] > 6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7
	I0729 12:05:52.726028  153921 command_runner.go:130] > df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c
	I0729 12:05:52.726047  153921 command_runner.go:130] > fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4
	I0729 12:05:52.726114  153921 command_runner.go:130] > 876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f
	I0729 12:05:52.727682  153921 cri.go:89] found id: "c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf"
	I0729 12:05:52.727701  153921 cri.go:89] found id: "8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728"
	I0729 12:05:52.727706  153921 cri.go:89] found id: "3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97"
	I0729 12:05:52.727711  153921 cri.go:89] found id: "8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7"
	I0729 12:05:52.727714  153921 cri.go:89] found id: "6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7"
	I0729 12:05:52.727719  153921 cri.go:89] found id: "df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c"
	I0729 12:05:52.727722  153921 cri.go:89] found id: "fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4"
	I0729 12:05:52.727726  153921 cri.go:89] found id: "876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f"
	I0729 12:05:52.727730  153921 cri.go:89] found id: ""
	I0729 12:05:52.727794  153921 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.618977984Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255002618954309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=106ed4df-3ca8-40b3-9156-9f62ac4961f0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.619723612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a06e087-09e0-44be-8d99-329c6bc29427 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.619785078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a06e087-09e0-44be-8d99-329c6bc29427 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.620117501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f69c067b6438ecb6a0bb7af97b5d903c85ce20d31f04353f2ae2d7bbef8335b,PodSandboxId:9f404395fcb142a9b4456cf414d0b6425fa9d5d86326fc50ea7f7a94ba5c4f51,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722254792676658392,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba73282e10b9ee46d7003f6ccbd7e8fde2b1fb12f6b77bc53fcc217a23b227,PodSandboxId:8fd2eab287847ecbaedfb099bc70e8f0ec30d22e547d58a3e6d13db40b156658,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722254759055587993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6f6b69bf3ee03e8b62e0500a953d5fe5ae6241dd7b720e3377d0a6945983e2,PodSandboxId:f9da110c3037e38586da452be5f4b8e1af60bf8b22ce19dd35ab010e7c884946,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254759003117837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9603547d9c6e2a2e316e3f52e65e93471bfd4c4a0adf42690df43bca8f48d30a,PodSandboxId:9576a6db9cf5f32627e5a485077584d8f8ac571746a42f3bc5a2c1c448830f8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254758920729306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]
string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a5c8d6c2aa8b243d0e485f25218696c45f132d81ae93515aa708743cea4f2c,PodSandboxId:241012740a341c357472db9af2f02549409000560c3a4c95fb24f6344b7feeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254758917845243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.ku
bernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7737deecc681c29844d9309e7c35cc28580fc2869196970b0c1d60834e7851d0,PodSandboxId:5c21c4684ed60c57554b17f2724e26dcc708fb4a629fc0bef058e3bbb58f6d46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254755130353657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},Annotations:map[string]
string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ad017663bf344289f2f515a43a65cc735f0b1e7ca966b460df94f93bf0c9a8,PodSandboxId:3c5be8bf6408e36d4457928eae6099d5ee65da62e8f47e4bf65f3ae8639b85da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254755062006618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:map[string]string{io.kubernetes.container.hash: 12446c5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87435c2f87aa6eaf5d39856b2b85194c451cc4c0aed10fed1bc0258f36d3ba35,PodSandboxId:685c423ee78861fa26f6c582001d5df568f5621d32d16171c90161f962baa6b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254755093644501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb577a51515f8d3a66c6a8db1c70cadc89100a42310c5ad35badf7ed786930e,PodSandboxId:eddda9de63feac095699511b75fda1f8edec8214f84c3f5ae981be1ae0bf47c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254755055847122,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c02cce176e08cb044b43a748ca490abdfcfae6485a584e04fc72e9cc6cb94cc,PodSandboxId:0910f59549a24fb230cab625039a377bc21d63e933ed5dc57fbfba747ae0674e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722254439096243531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf,PodSandboxId:e275b8d2f708481b07032d5f38763f42e28e161cfb73cf45d30c55ba20e2b4d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722254387942804976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728,PodSandboxId:07bdd82b9a9b80a3d842ce8654c2acc02d803e6afe43984d67af933788e3c664,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254387903609595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.kubernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97,PodSandboxId:a732bac4807fa1dbd1524a4a6fead81aa4168ccf9f06ab367ab49592d75e4a22,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722254376087197924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7,PodSandboxId:2c256554ef1e40ada8ea9a0bd2ca5e1ba2000191b5426ae3f218c1508eed4b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722254372919105784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7,PodSandboxId:482c57be2aacd2d1c65abc31fc83987cadbdfd2a13639fd4926c6d6d4e049dda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722254353269733349,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc
27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c,PodSandboxId:87de68fefdccebd4a1b9f2fe8ff3aa1749908206f41e06564a104942d394a0a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722254353264522127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 12446c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4,PodSandboxId:1356ca0a9f891da4560b095557c201e3232a1765c3a3988794022dde0f76d097,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722254353214194473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f,PodSandboxId:df3427fe72c07b35621d6314b880440a09d9c9214b7e6ca8ceb0a372e066fe21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722254353193908031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a06e087-09e0-44be-8d99-329c6bc29427 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.660139016Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33c224e5-95a1-4d0d-831d-583df3cc0685 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.660217768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33c224e5-95a1-4d0d-831d-583df3cc0685 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.661156552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4db7d986-c343-4a49-9507-1f9df3a9dc36 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.662015706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255002661986453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4db7d986-c343-4a49-9507-1f9df3a9dc36 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.662484002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1e2d0b8-a1a0-4bd5-beac-574d4d8f2f5c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.662583941Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1e2d0b8-a1a0-4bd5-beac-574d4d8f2f5c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.663078679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f69c067b6438ecb6a0bb7af97b5d903c85ce20d31f04353f2ae2d7bbef8335b,PodSandboxId:9f404395fcb142a9b4456cf414d0b6425fa9d5d86326fc50ea7f7a94ba5c4f51,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722254792676658392,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba73282e10b9ee46d7003f6ccbd7e8fde2b1fb12f6b77bc53fcc217a23b227,PodSandboxId:8fd2eab287847ecbaedfb099bc70e8f0ec30d22e547d58a3e6d13db40b156658,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722254759055587993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6f6b69bf3ee03e8b62e0500a953d5fe5ae6241dd7b720e3377d0a6945983e2,PodSandboxId:f9da110c3037e38586da452be5f4b8e1af60bf8b22ce19dd35ab010e7c884946,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254759003117837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9603547d9c6e2a2e316e3f52e65e93471bfd4c4a0adf42690df43bca8f48d30a,PodSandboxId:9576a6db9cf5f32627e5a485077584d8f8ac571746a42f3bc5a2c1c448830f8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254758920729306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]
string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a5c8d6c2aa8b243d0e485f25218696c45f132d81ae93515aa708743cea4f2c,PodSandboxId:241012740a341c357472db9af2f02549409000560c3a4c95fb24f6344b7feeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254758917845243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.ku
bernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7737deecc681c29844d9309e7c35cc28580fc2869196970b0c1d60834e7851d0,PodSandboxId:5c21c4684ed60c57554b17f2724e26dcc708fb4a629fc0bef058e3bbb58f6d46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254755130353657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},Annotations:map[string]
string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ad017663bf344289f2f515a43a65cc735f0b1e7ca966b460df94f93bf0c9a8,PodSandboxId:3c5be8bf6408e36d4457928eae6099d5ee65da62e8f47e4bf65f3ae8639b85da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254755062006618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:map[string]string{io.kubernetes.container.hash: 12446c5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87435c2f87aa6eaf5d39856b2b85194c451cc4c0aed10fed1bc0258f36d3ba35,PodSandboxId:685c423ee78861fa26f6c582001d5df568f5621d32d16171c90161f962baa6b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254755093644501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb577a51515f8d3a66c6a8db1c70cadc89100a42310c5ad35badf7ed786930e,PodSandboxId:eddda9de63feac095699511b75fda1f8edec8214f84c3f5ae981be1ae0bf47c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254755055847122,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c02cce176e08cb044b43a748ca490abdfcfae6485a584e04fc72e9cc6cb94cc,PodSandboxId:0910f59549a24fb230cab625039a377bc21d63e933ed5dc57fbfba747ae0674e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722254439096243531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf,PodSandboxId:e275b8d2f708481b07032d5f38763f42e28e161cfb73cf45d30c55ba20e2b4d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722254387942804976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728,PodSandboxId:07bdd82b9a9b80a3d842ce8654c2acc02d803e6afe43984d67af933788e3c664,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254387903609595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.kubernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97,PodSandboxId:a732bac4807fa1dbd1524a4a6fead81aa4168ccf9f06ab367ab49592d75e4a22,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722254376087197924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7,PodSandboxId:2c256554ef1e40ada8ea9a0bd2ca5e1ba2000191b5426ae3f218c1508eed4b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722254372919105784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7,PodSandboxId:482c57be2aacd2d1c65abc31fc83987cadbdfd2a13639fd4926c6d6d4e049dda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722254353269733349,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc
27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c,PodSandboxId:87de68fefdccebd4a1b9f2fe8ff3aa1749908206f41e06564a104942d394a0a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722254353264522127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 12446c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4,PodSandboxId:1356ca0a9f891da4560b095557c201e3232a1765c3a3988794022dde0f76d097,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722254353214194473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f,PodSandboxId:df3427fe72c07b35621d6314b880440a09d9c9214b7e6ca8ceb0a372e066fe21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722254353193908031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1e2d0b8-a1a0-4bd5-beac-574d4d8f2f5c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.706141499Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29840886-0741-44f0-ac1b-8a5a55317f29 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.706239745Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29840886-0741-44f0-ac1b-8a5a55317f29 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.707681203Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c2c7c6ae-026f-429a-a441-a134d4a7b86a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.708090569Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255002708069562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2c7c6ae-026f-429a-a441-a134d4a7b86a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.708830703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c97c323e-978f-4040-b18f-e939e364bc38 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.708882225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c97c323e-978f-4040-b18f-e939e364bc38 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.709471806Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f69c067b6438ecb6a0bb7af97b5d903c85ce20d31f04353f2ae2d7bbef8335b,PodSandboxId:9f404395fcb142a9b4456cf414d0b6425fa9d5d86326fc50ea7f7a94ba5c4f51,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722254792676658392,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba73282e10b9ee46d7003f6ccbd7e8fde2b1fb12f6b77bc53fcc217a23b227,PodSandboxId:8fd2eab287847ecbaedfb099bc70e8f0ec30d22e547d58a3e6d13db40b156658,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722254759055587993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6f6b69bf3ee03e8b62e0500a953d5fe5ae6241dd7b720e3377d0a6945983e2,PodSandboxId:f9da110c3037e38586da452be5f4b8e1af60bf8b22ce19dd35ab010e7c884946,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254759003117837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9603547d9c6e2a2e316e3f52e65e93471bfd4c4a0adf42690df43bca8f48d30a,PodSandboxId:9576a6db9cf5f32627e5a485077584d8f8ac571746a42f3bc5a2c1c448830f8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254758920729306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]
string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a5c8d6c2aa8b243d0e485f25218696c45f132d81ae93515aa708743cea4f2c,PodSandboxId:241012740a341c357472db9af2f02549409000560c3a4c95fb24f6344b7feeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254758917845243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.ku
bernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7737deecc681c29844d9309e7c35cc28580fc2869196970b0c1d60834e7851d0,PodSandboxId:5c21c4684ed60c57554b17f2724e26dcc708fb4a629fc0bef058e3bbb58f6d46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254755130353657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},Annotations:map[string]
string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ad017663bf344289f2f515a43a65cc735f0b1e7ca966b460df94f93bf0c9a8,PodSandboxId:3c5be8bf6408e36d4457928eae6099d5ee65da62e8f47e4bf65f3ae8639b85da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254755062006618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:map[string]string{io.kubernetes.container.hash: 12446c5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87435c2f87aa6eaf5d39856b2b85194c451cc4c0aed10fed1bc0258f36d3ba35,PodSandboxId:685c423ee78861fa26f6c582001d5df568f5621d32d16171c90161f962baa6b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254755093644501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb577a51515f8d3a66c6a8db1c70cadc89100a42310c5ad35badf7ed786930e,PodSandboxId:eddda9de63feac095699511b75fda1f8edec8214f84c3f5ae981be1ae0bf47c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254755055847122,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c02cce176e08cb044b43a748ca490abdfcfae6485a584e04fc72e9cc6cb94cc,PodSandboxId:0910f59549a24fb230cab625039a377bc21d63e933ed5dc57fbfba747ae0674e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722254439096243531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf,PodSandboxId:e275b8d2f708481b07032d5f38763f42e28e161cfb73cf45d30c55ba20e2b4d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722254387942804976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728,PodSandboxId:07bdd82b9a9b80a3d842ce8654c2acc02d803e6afe43984d67af933788e3c664,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254387903609595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.kubernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97,PodSandboxId:a732bac4807fa1dbd1524a4a6fead81aa4168ccf9f06ab367ab49592d75e4a22,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722254376087197924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7,PodSandboxId:2c256554ef1e40ada8ea9a0bd2ca5e1ba2000191b5426ae3f218c1508eed4b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722254372919105784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7,PodSandboxId:482c57be2aacd2d1c65abc31fc83987cadbdfd2a13639fd4926c6d6d4e049dda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722254353269733349,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc
27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c,PodSandboxId:87de68fefdccebd4a1b9f2fe8ff3aa1749908206f41e06564a104942d394a0a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722254353264522127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 12446c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4,PodSandboxId:1356ca0a9f891da4560b095557c201e3232a1765c3a3988794022dde0f76d097,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722254353214194473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f,PodSandboxId:df3427fe72c07b35621d6314b880440a09d9c9214b7e6ca8ceb0a372e066fe21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722254353193908031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c97c323e-978f-4040-b18f-e939e364bc38 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.749914701Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b63aa2c3-31b5-4413-bce6-3ac1c0d4693d name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.749990429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b63aa2c3-31b5-4413-bce6-3ac1c0d4693d name=/runtime.v1.RuntimeService/Version
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.751288645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bde1e6da-0d4b-466b-b544-8f8c2e7b7f97 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.751796253Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255002751771574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bde1e6da-0d4b-466b-b544-8f8c2e7b7f97 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.752471392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b018f26-fa78-4b61-a50d-21986a49cfbd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.752530789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b018f26-fa78-4b61-a50d-21986a49cfbd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:10:02 multinode-293807 crio[2878]: time="2024-07-29 12:10:02.752844921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8f69c067b6438ecb6a0bb7af97b5d903c85ce20d31f04353f2ae2d7bbef8335b,PodSandboxId:9f404395fcb142a9b4456cf414d0b6425fa9d5d86326fc50ea7f7a94ba5c4f51,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722254792676658392,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90ba73282e10b9ee46d7003f6ccbd7e8fde2b1fb12f6b77bc53fcc217a23b227,PodSandboxId:8fd2eab287847ecbaedfb099bc70e8f0ec30d22e547d58a3e6d13db40b156658,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722254759055587993,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d6f6b69bf3ee03e8b62e0500a953d5fe5ae6241dd7b720e3377d0a6945983e2,PodSandboxId:f9da110c3037e38586da452be5f4b8e1af60bf8b22ce19dd35ab010e7c884946,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722254759003117837,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9603547d9c6e2a2e316e3f52e65e93471bfd4c4a0adf42690df43bca8f48d30a,PodSandboxId:9576a6db9cf5f32627e5a485077584d8f8ac571746a42f3bc5a2c1c448830f8c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722254758920729306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]
string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a5c8d6c2aa8b243d0e485f25218696c45f132d81ae93515aa708743cea4f2c,PodSandboxId:241012740a341c357472db9af2f02549409000560c3a4c95fb24f6344b7feeda,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722254758917845243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.ku
bernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7737deecc681c29844d9309e7c35cc28580fc2869196970b0c1d60834e7851d0,PodSandboxId:5c21c4684ed60c57554b17f2724e26dcc708fb4a629fc0bef058e3bbb58f6d46,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722254755130353657,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},Annotations:map[string]
string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ad017663bf344289f2f515a43a65cc735f0b1e7ca966b460df94f93bf0c9a8,PodSandboxId:3c5be8bf6408e36d4457928eae6099d5ee65da62e8f47e4bf65f3ae8639b85da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722254755062006618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:map[string]string{io.kubernetes.container.hash: 12446c5
,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87435c2f87aa6eaf5d39856b2b85194c451cc4c0aed10fed1bc0258f36d3ba35,PodSandboxId:685c423ee78861fa26f6c582001d5df568f5621d32d16171c90161f962baa6b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722254755093644501,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebb577a51515f8d3a66c6a8db1c70cadc89100a42310c5ad35badf7ed786930e,PodSandboxId:eddda9de63feac095699511b75fda1f8edec8214f84c3f5ae981be1ae0bf47c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722254755055847122,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c02cce176e08cb044b43a748ca490abdfcfae6485a584e04fc72e9cc6cb94cc,PodSandboxId:0910f59549a24fb230cab625039a377bc21d63e933ed5dc57fbfba747ae0674e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722254439096243531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-tzhl8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2449333d-ddfd-4a44-a8a0-0d701e603c26,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4981ff,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf,PodSandboxId:e275b8d2f708481b07032d5f38763f42e28e161cfb73cf45d30c55ba20e2b4d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722254387942804976,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w4vb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be897904-1343-4ad4-a2f1-8e12137637cc,},Annotations:map[string]string{io.kubernetes.container.hash: 219ad8e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8746d4a660dc1eeb2bb695daeb7a90d29b7c2142b06fe39707ea71fb9c397728,PodSandboxId:07bdd82b9a9b80a3d842ce8654c2acc02d803e6afe43984d67af933788e3c664,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722254387903609595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 7d6946c3-cca0-47ca-bd10-618c715db560,},Annotations:map[string]string{io.kubernetes.container.hash: d91fab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97,PodSandboxId:a732bac4807fa1dbd1524a4a6fead81aa4168ccf9f06ab367ab49592d75e4a22,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722254376087197924,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-z96j2,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 0b01e79a-fb4c-4177-a131-6cb670645a7c,},Annotations:map[string]string{io.kubernetes.container.hash: e463d48f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7,PodSandboxId:2c256554ef1e40ada8ea9a0bd2ca5e1ba2000191b5426ae3f218c1508eed4b04,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722254372919105784,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5z2jx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 2d51aa0e-f3ce-4f29-9f05-1953193edbe7,},Annotations:map[string]string{io.kubernetes.container.hash: c4a2874f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7,PodSandboxId:482c57be2aacd2d1c65abc31fc83987cadbdfd2a13639fd4926c6d6d4e049dda,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722254353269733349,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dc
27ff891ce58d177b26e1011953683,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c,PodSandboxId:87de68fefdccebd4a1b9f2fe8ff3aa1749908206f41e06564a104942d394a0a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722254353264522127,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3162bee171561855101bcd9570a3c70,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 12446c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4,PodSandboxId:1356ca0a9f891da4560b095557c201e3232a1765c3a3988794022dde0f76d097,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722254353214194473,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a4b99a3145bfae572bc197482b38fad,},An
notations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f,PodSandboxId:df3427fe72c07b35621d6314b880440a09d9c9214b7e6ca8ceb0a372e066fe21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722254353193908031,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-293807,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90577a10b586077aa49f919798b4865a,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b018f26-fa78-4b61-a50d-21986a49cfbd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f69c067b6438       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   9f404395fcb14       busybox-fc5497c4f-tzhl8
	90ba73282e10b       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   8fd2eab287847       kindnet-z96j2
	2d6f6b69bf3ee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   f9da110c3037e       coredns-7db6d8ff4d-w4vb7
	9603547d9c6e2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   9576a6db9cf5f       kube-proxy-5z2jx
	f7a5c8d6c2aa8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   241012740a341       storage-provisioner
	7737deecc681c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   5c21c4684ed60       kube-controller-manager-multinode-293807
	87435c2f87aa6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   685c423ee7886       kube-apiserver-multinode-293807
	e3ad017663bf3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   3c5be8bf6408e       etcd-multinode-293807
	ebb577a51515f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   eddda9de63fea       kube-scheduler-multinode-293807
	3c02cce176e08       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   0910f59549a24       busybox-fc5497c4f-tzhl8
	c1b0f5bdafedb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   e275b8d2f7084       coredns-7db6d8ff4d-w4vb7
	8746d4a660dc1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   07bdd82b9a9b8       storage-provisioner
	3afb71673c939       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   a732bac4807fa       kindnet-z96j2
	8e90b9960f92b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   2c256554ef1e4       kube-proxy-5z2jx
	6b5caf26b3818       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   482c57be2aacd       kube-scheduler-multinode-293807
	df5165ac9d720       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   87de68fefdcce       etcd-multinode-293807
	fd4b90fabffac       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   1356ca0a9f891       kube-controller-manager-multinode-293807
	876b71f991cdd       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   df3427fe72c07       kube-apiserver-multinode-293807
	
	
	==> coredns [2d6f6b69bf3ee03e8b62e0500a953d5fe5ae6241dd7b720e3377d0a6945983e2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46783 - 40276 "HINFO IN 6339179047588870057.1204484978150539655. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027646527s
	
	
	==> coredns [c1b0f5bdafedbea976e4b0d3fa4a4b391847b6368dcc078a346dc58a9d99babf] <==
	[INFO] 10.244.1.2:57435 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001720036s
	[INFO] 10.244.1.2:52911 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076885s
	[INFO] 10.244.1.2:51395 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056026s
	[INFO] 10.244.1.2:45677 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001158027s
	[INFO] 10.244.1.2:39978 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069551s
	[INFO] 10.244.1.2:35866 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057902s
	[INFO] 10.244.1.2:41919 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047666s
	[INFO] 10.244.0.3:51370 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100434s
	[INFO] 10.244.0.3:57049 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000047853s
	[INFO] 10.244.0.3:51525 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083359s
	[INFO] 10.244.0.3:37573 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045916s
	[INFO] 10.244.1.2:52000 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001752s
	[INFO] 10.244.1.2:52490 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067196s
	[INFO] 10.244.1.2:41028 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055802s
	[INFO] 10.244.1.2:60965 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053382s
	[INFO] 10.244.0.3:42163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013759s
	[INFO] 10.244.0.3:36364 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064566s
	[INFO] 10.244.0.3:56065 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000055972s
	[INFO] 10.244.0.3:57361 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000054674s
	[INFO] 10.244.1.2:58076 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134849s
	[INFO] 10.244.1.2:53602 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147888s
	[INFO] 10.244.1.2:51496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079398s
	[INFO] 10.244.1.2:52210 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077452s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-293807
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-293807
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=multinode-293807
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_59_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:59:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-293807
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:09:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:05:58 +0000   Mon, 29 Jul 2024 11:59:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:05:58 +0000   Mon, 29 Jul 2024 11:59:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:05:58 +0000   Mon, 29 Jul 2024 11:59:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:05:58 +0000   Mon, 29 Jul 2024 11:59:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    multinode-293807
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa53a409ea1943e5bd1c7340d912bf1e
	  System UUID:                fa53a409-ea19-43e5-bd1c-7340d912bf1e
	  Boot ID:                    b3c0e91e-14f7-48ce-9b0a-53c67b3e5c58
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tzhl8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m26s
	  kube-system                 coredns-7db6d8ff4d-w4vb7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-293807                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-z96j2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-293807             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-293807    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-5z2jx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-293807             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-293807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-293807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-293807 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-293807 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-293807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-293807 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-293807 event: Registered Node multinode-293807 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-293807 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node multinode-293807 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node multinode-293807 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node multinode-293807 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                node-controller  Node multinode-293807 event: Registered Node multinode-293807 in Controller
	
	
	Name:               multinode-293807-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-293807-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=multinode-293807
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T12_06_40_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:06:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-293807-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:07:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 12:07:09 +0000   Mon, 29 Jul 2024 12:08:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 12:07:09 +0000   Mon, 29 Jul 2024 12:08:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 12:07:09 +0000   Mon, 29 Jul 2024 12:08:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 12:07:09 +0000   Mon, 29 Jul 2024 12:08:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.54
	  Hostname:    multinode-293807-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 15392dc16ca04679b94d635da7e15880
	  System UUID:                15392dc1-6ca0-4679-b94d-635da7e15880
	  Boot ID:                    c920ba33-518d-4627-8729-cf0e88483791
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rjb65    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-8shlp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m48s
	  kube-system                 kube-proxy-gnh9j           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m48s (x2 over 9m48s)  kubelet          Node multinode-293807-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m48s (x2 over 9m48s)  kubelet          Node multinode-293807-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m48s (x2 over 9m48s)  kubelet          Node multinode-293807-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m29s                  kubelet          Node multinode-293807-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-293807-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-293807-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-293807-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-293807-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-293807-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.056874] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058077] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.160249] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.137247] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.257038] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.033608] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +4.351109] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +0.064620] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.988283] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.079563] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.110288] systemd-fstab-generator[1474]: Ignoring "noauto" option for root device
	[  +0.124963] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.343772] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 12:00] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 12:05] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.146130] systemd-fstab-generator[2809]: Ignoring "noauto" option for root device
	[  +0.181244] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +0.137269] systemd-fstab-generator[2835]: Ignoring "noauto" option for root device
	[  +0.285771] systemd-fstab-generator[2863]: Ignoring "noauto" option for root device
	[  +0.986999] systemd-fstab-generator[2961]: Ignoring "noauto" option for root device
	[  +2.058183] systemd-fstab-generator[3086]: Ignoring "noauto" option for root device
	[  +4.625212] kauditd_printk_skb: 184 callbacks suppressed
	[Jul29 12:06] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.548241] systemd-fstab-generator[3923]: Ignoring "noauto" option for root device
	[ +18.216685] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [df5165ac9d72046b5dab63a7bb596ee67c7f563d742106e2d566164703a2614c] <==
	{"level":"info","ts":"2024-07-29T11:59:13.622253Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T12:00:15.400263Z","caller":"traceutil/trace.go:171","msg":"trace[1514114236] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"211.272224ms","start":"2024-07-29T12:00:15.188967Z","end":"2024-07-29T12:00:15.400239Z","steps":["trace[1514114236] 'process raft request'  (duration: 208.358939ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:00:15.403696Z","caller":"traceutil/trace.go:171","msg":"trace[1213000608] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"170.391755ms","start":"2024-07-29T12:00:15.233292Z","end":"2024-07-29T12:00:15.403684Z","steps":["trace[1213000608] 'process raft request'  (duration: 170.066271ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:00:24.403143Z","caller":"traceutil/trace.go:171","msg":"trace[902457188] linearizableReadLoop","detail":"{readStateIndex:551; appliedIndex:550; }","duration":"174.917552ms","start":"2024-07-29T12:00:24.228204Z","end":"2024-07-29T12:00:24.403122Z","steps":["trace[902457188] 'read index received'  (duration: 174.743556ms)","trace[902457188] 'applied index is now lower than readState.Index'  (duration: 173.237µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T12:00:24.403326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.101172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-07-29T12:00:24.403461Z","caller":"traceutil/trace.go:171","msg":"trace[549595892] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:524; }","duration":"175.249269ms","start":"2024-07-29T12:00:24.2282Z","end":"2024-07-29T12:00:24.403449Z","steps":["trace[549595892] 'agreement among raft nodes before linearized reading'  (duration: 175.074452ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:00:24.40348Z","caller":"traceutil/trace.go:171","msg":"trace[1192560475] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"186.331276ms","start":"2024-07-29T12:00:24.217135Z","end":"2024-07-29T12:00:24.403466Z","steps":["trace[1192560475] 'process raft request'  (duration: 185.85595ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:00:24.673725Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.563496ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12938156726643087140 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:520 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T12:00:24.67411Z","caller":"traceutil/trace.go:171","msg":"trace[1435156300] transaction","detail":"{read_only:false; response_revision:525; number_of_response:1; }","duration":"266.199725ms","start":"2024-07-29T12:00:24.407898Z","end":"2024-07-29T12:00:24.674098Z","steps":["trace[1435156300] 'process raft request'  (duration: 54.89868ms)","trace[1435156300] 'compare'  (duration: 210.339967ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:00:24.674475Z","caller":"traceutil/trace.go:171","msg":"trace[1197836704] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"265.217215ms","start":"2024-07-29T12:00:24.409248Z","end":"2024-07-29T12:00:24.674465Z","steps":["trace[1197836704] 'process raft request'  (duration: 264.628244ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:01:06.689934Z","caller":"traceutil/trace.go:171","msg":"trace[1219340090] linearizableReadLoop","detail":"{readStateIndex:644; appliedIndex:642; }","duration":"151.346381ms","start":"2024-07-29T12:01:06.538572Z","end":"2024-07-29T12:01:06.689919Z","steps":["trace[1219340090] 'read index received'  (duration: 54.410411ms)","trace[1219340090] 'applied index is now lower than readState.Index'  (duration: 96.935147ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:01:06.690053Z","caller":"traceutil/trace.go:171","msg":"trace[1226795760] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"219.875855ms","start":"2024-07-29T12:01:06.47017Z","end":"2024-07-29T12:01:06.690045Z","steps":["trace[1226795760] 'process raft request'  (duration: 122.804216ms)","trace[1226795760] 'compare'  (duration: 96.827401ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:01:06.690214Z","caller":"traceutil/trace.go:171","msg":"trace[2129257977] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"192.560014ms","start":"2024-07-29T12:01:06.497648Z","end":"2024-07-29T12:01:06.690208Z","steps":["trace[2129257977] 'process raft request'  (duration: 192.242159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T12:01:06.690343Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.770711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-293807-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T12:01:06.690381Z","caller":"traceutil/trace.go:171","msg":"trace[888898775] range","detail":"{range_begin:/registry/minions/multinode-293807-m03; range_end:; response_count:1; response_revision:609; }","duration":"151.844472ms","start":"2024-07-29T12:01:06.538529Z","end":"2024-07-29T12:01:06.690374Z","steps":["trace[888898775] 'agreement among raft nodes before linearized reading'  (duration: 151.752389ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T12:04:19.007037Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T12:04:19.007096Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-293807","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	{"level":"warn","ts":"2024-07-29T12:04:19.007224Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:04:19.007308Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:04:19.093614Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T12:04:19.093701Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T12:04:19.093756Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c9867c1935b8b38d","current-leader-member-id":"c9867c1935b8b38d"}
	{"level":"info","ts":"2024-07-29T12:04:19.096577Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-07-29T12:04:19.096762Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-07-29T12:04:19.096804Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-293807","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	
	
	==> etcd [e3ad017663bf344289f2f515a43a65cc735f0b1e7ca966b460df94f93bf0c9a8] <==
	{"level":"info","ts":"2024-07-29T12:05:55.428582Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T12:05:55.42861Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T12:05:55.428919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d switched to configuration voters=(14521430496220066701)"}
	{"level":"info","ts":"2024-07-29T12:05:55.429023Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8cfb77a10e566a07","local-member-id":"c9867c1935b8b38d","added-peer-id":"c9867c1935b8b38d","added-peer-peer-urls":["https://192.168.39.26:2380"]}
	{"level":"info","ts":"2024-07-29T12:05:55.429156Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cfb77a10e566a07","local-member-id":"c9867c1935b8b38d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:05:55.429227Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:05:55.433513Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T12:05:55.437752Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c9867c1935b8b38d","initial-advertise-peer-urls":["https://192.168.39.26:2380"],"listen-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.26:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T12:05:55.437814Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T12:05:55.437902Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-07-29T12:05:55.438Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-07-29T12:05:56.772051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T12:05:56.772111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T12:05:56.772148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgPreVoteResp from c9867c1935b8b38d at term 2"}
	{"level":"info","ts":"2024-07-29T12:05:56.772162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T12:05:56.772168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgVoteResp from c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-07-29T12:05:56.772176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became leader at term 3"}
	{"level":"info","ts":"2024-07-29T12:05:56.772183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c9867c1935b8b38d elected leader c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-07-29T12:05:56.778754Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:05:56.778711Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c9867c1935b8b38d","local-member-attributes":"{Name:multinode-293807 ClientURLs:[https://192.168.39.26:2379]}","request-path":"/0/members/c9867c1935b8b38d/attributes","cluster-id":"8cfb77a10e566a07","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T12:05:56.779639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:05:56.779893Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T12:05:56.779908Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T12:05:56.780539Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.26:2379"}
	{"level":"info","ts":"2024-07-29T12:05:56.781338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:10:03 up 11 min,  0 users,  load average: 0.06, 0.18, 0.11
	Linux multinode-293807 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3afb71673c9399ade03c30a3f634cb750706d8722564cd1ec4e2c309807e5b97] <==
	I0729 12:03:37.101897       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	I0729 12:03:47.108501       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:03:47.108606       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:03:47.108772       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:03:47.108889       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	I0729 12:03:47.108987       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:03:47.109009       1 main.go:299] handling current node
	I0729 12:03:57.109790       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:03:57.109835       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:03:57.109961       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:03:57.109984       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	I0729 12:03:57.110036       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:03:57.110056       1 main.go:299] handling current node
	I0729 12:04:07.109788       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:04:07.109888       1 main.go:299] handling current node
	I0729 12:04:07.109917       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:04:07.109936       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:04:07.110131       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:04:07.110178       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	I0729 12:04:17.108995       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:04:17.109044       1 main.go:299] handling current node
	I0729 12:04:17.109060       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:04:17.109066       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:04:17.109196       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I0729 12:04:17.109219       1 main.go:322] Node multinode-293807-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [90ba73282e10b9ee46d7003f6ccbd7e8fde2b1fb12f6b77bc53fcc217a23b227] <==
	I0729 12:08:59.897362       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:09:09.900210       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:09:09.900248       1 main.go:299] handling current node
	I0729 12:09:09.900263       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:09:09.900269       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:09:19.904847       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:09:19.904954       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:09:19.905092       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:09:19.905116       1 main.go:299] handling current node
	I0729 12:09:29.896760       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:09:29.896857       1 main.go:299] handling current node
	I0729 12:09:29.896885       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:09:29.896903       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:09:39.900458       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:09:39.900503       1 main.go:299] handling current node
	I0729 12:09:39.900518       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:09:39.900524       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:09:49.899162       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:09:49.899258       1 main.go:299] handling current node
	I0729 12:09:49.899287       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:09:49.899309       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	I0729 12:09:59.897022       1 main.go:295] Handling node with IPs: map[192.168.39.26:{}]
	I0729 12:09:59.897054       1 main.go:299] handling current node
	I0729 12:09:59.897070       1 main.go:295] Handling node with IPs: map[192.168.39.54:{}]
	I0729 12:09:59.897075       1 main.go:322] Node multinode-293807-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [87435c2f87aa6eaf5d39856b2b85194c451cc4c0aed10fed1bc0258f36d3ba35] <==
	I0729 12:05:58.055280       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 12:05:58.055328       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 12:05:58.055335       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 12:05:58.056879       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 12:05:58.060855       1 aggregator.go:165] initial CRD sync complete...
	I0729 12:05:58.060895       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 12:05:58.060902       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 12:05:58.060908       1 cache.go:39] Caches are synced for autoregister controller
	I0729 12:05:58.070284       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 12:05:58.070358       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 12:05:58.070785       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0729 12:05:58.070865       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 12:05:58.079890       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 12:05:58.107613       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:05:58.107707       1 policy_source.go:224] refreshing policies
	I0729 12:05:58.107658       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 12:05:58.159607       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 12:05:58.966107       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 12:05:59.928130       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 12:06:00.050630       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 12:06:00.069814       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 12:06:00.204979       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 12:06:00.218850       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 12:06:10.798665       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 12:06:10.835036       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [876b71f991cddb6e2fe917017d68dbb62e253660f820eb83783229d6eb0f644f] <==
	I0729 11:59:17.707391       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 11:59:17.720243       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 11:59:18.047171       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 11:59:18.543142       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 11:59:18.560493       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 11:59:18.572933       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 11:59:31.702668       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 11:59:31.702668       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 11:59:31.809811       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 12:00:40.217274       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:33708: use of closed network connection
	E0729 12:00:40.383994       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:33722: use of closed network connection
	E0729 12:00:40.570599       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:33740: use of closed network connection
	E0729 12:00:40.730848       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:49938: use of closed network connection
	E0729 12:00:40.891578       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:49950: use of closed network connection
	E0729 12:00:41.046869       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:49976: use of closed network connection
	E0729 12:00:41.317287       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:50002: use of closed network connection
	E0729 12:00:41.489698       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:50022: use of closed network connection
	E0729 12:00:41.654062       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:50038: use of closed network connection
	E0729 12:00:41.816020       1 conn.go:339] Error on socket receive: read tcp 192.168.39.26:8443->192.168.39.1:50054: use of closed network connection
	I0729 12:04:19.010710       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0729 12:04:19.036896       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:04:19.036975       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:04:19.037015       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:04:19.037075       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 12:04:19.037129       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7737deecc681c29844d9309e7c35cc28580fc2869196970b0c1d60834e7851d0] <==
	I0729 12:06:39.005006       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-293807-m02\" does not exist"
	I0729 12:06:39.022687       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-293807-m02" podCIDRs=["10.244.1.0/24"]
	I0729 12:06:40.922999       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.212µs"
	I0729 12:06:40.936169       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.056µs"
	I0729 12:06:40.967119       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.697µs"
	I0729 12:06:40.971892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.212µs"
	I0729 12:06:40.977832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.085µs"
	I0729 12:06:41.220437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="241.891µs"
	I0729 12:06:57.835640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:06:57.860683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.281µs"
	I0729 12:06:57.874586       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.406µs"
	I0729 12:07:00.271357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.047378ms"
	I0729 12:07:00.272529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.086µs"
	I0729 12:07:15.818238       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:07:17.195732       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:07:17.196043       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-293807-m03\" does not exist"
	I0729 12:07:17.214082       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-293807-m03" podCIDRs=["10.244.2.0/24"]
	I0729 12:07:36.201785       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:07:41.406491       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:08:20.941582       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.782809ms"
	I0729 12:08:20.941675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.116µs"
	I0729 12:08:30.789702       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-6x7h4"
	I0729 12:08:30.815550       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-6x7h4"
	I0729 12:08:30.815584       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-qdd9t"
	I0729 12:08:30.839817       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-qdd9t"
	
	
	==> kube-controller-manager [fd4b90fabffacc7893bc8d341d444e2849aa3234dcd1172880f74aa6f8cd12f4] <==
	I0729 12:00:15.406998       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-293807-m02\" does not exist"
	I0729 12:00:15.419185       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-293807-m02" podCIDRs=["10.244.1.0/24"]
	I0729 12:00:16.406352       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-293807-m02"
	I0729 12:00:34.721213       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:00:37.162496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.549042ms"
	I0729 12:00:37.191759       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.197938ms"
	I0729 12:00:37.192003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.422µs"
	I0729 12:00:37.196715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.756µs"
	I0729 12:00:39.262883       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.741459ms"
	I0729 12:00:39.262957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.253µs"
	I0729 12:00:39.758726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.22304ms"
	I0729 12:00:39.759256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.871µs"
	I0729 12:01:06.693312       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:01:06.693905       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-293807-m03\" does not exist"
	I0729 12:01:06.731225       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-293807-m03" podCIDRs=["10.244.2.0/24"]
	I0729 12:01:11.426631       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-293807-m03"
	I0729 12:01:24.983236       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:01:53.620160       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:01:54.585517       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-293807-m03\" does not exist"
	I0729 12:01:54.587773       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:01:54.594946       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-293807-m03" podCIDRs=["10.244.3.0/24"]
	I0729 12:02:13.481659       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m02"
	I0729 12:02:51.481576       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-293807-m03"
	I0729 12:02:51.536234       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.110857ms"
	I0729 12:02:51.536388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.587µs"
	
	
	==> kube-proxy [8e90b9960f92bf0a6d0233894f4fce2dcb8e88d592c1e88d08c4528d0de0c7b7] <==
	I0729 11:59:33.369150       1 server_linux.go:69] "Using iptables proxy"
	I0729 11:59:33.404538       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.26"]
	I0729 11:59:33.458175       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 11:59:33.458215       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 11:59:33.458232       1 server_linux.go:165] "Using iptables Proxier"
	I0729 11:59:33.462236       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 11:59:33.462685       1 server.go:872] "Version info" version="v1.30.3"
	I0729 11:59:33.462741       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:59:33.464388       1 config.go:192] "Starting service config controller"
	I0729 11:59:33.464752       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 11:59:33.464821       1 config.go:101] "Starting endpoint slice config controller"
	I0729 11:59:33.464840       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 11:59:33.468062       1 config.go:319] "Starting node config controller"
	I0729 11:59:33.468166       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 11:59:33.565398       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 11:59:33.565465       1 shared_informer.go:320] Caches are synced for service config
	I0729 11:59:33.568498       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9603547d9c6e2a2e316e3f52e65e93471bfd4c4a0adf42690df43bca8f48d30a] <==
	I0729 12:05:59.271315       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:05:59.296946       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.26"]
	I0729 12:05:59.343625       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:05:59.343755       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:05:59.343809       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:05:59.346530       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:05:59.347547       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:05:59.350867       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:05:59.355062       1 config.go:192] "Starting service config controller"
	I0729 12:05:59.360135       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:05:59.360162       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:05:59.358972       1 config.go:319] "Starting node config controller"
	I0729 12:05:59.360217       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:05:59.355173       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:05:59.362174       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:05:59.362182       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 12:05:59.461251       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6b5caf26b381857bf9414a2a52c7577b7bdb8e959f769eab1b0f26aeab5ab1e7] <==
	E0729 11:59:16.077393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:59:16.076197       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:59:16.077466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:59:16.906369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:59:16.906405       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 11:59:16.959735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:59:16.959781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 11:59:16.977951       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 11:59:16.977996       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 11:59:17.043374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:59:17.043445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:59:17.054071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:59:17.054121       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:59:17.090994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:59:17.091041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:59:17.171973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:59:17.172016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:59:17.317569       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:59:17.317616       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:59:17.330877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:59:17.330985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:59:17.336703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:59:17.336795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0729 11:59:19.763683       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 12:04:19.013166       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ebb577a51515f8d3a66c6a8db1c70cadc89100a42310c5ad35badf7ed786930e] <==
	I0729 12:05:56.008131       1 serving.go:380] Generated self-signed cert in-memory
	I0729 12:05:58.073779       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 12:05:58.073811       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:05:58.082224       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 12:05:58.082288       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0729 12:05:58.082294       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0729 12:05:58.082321       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 12:05:58.082907       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 12:05:58.082936       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 12:05:58.082950       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0729 12:05:58.082956       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0729 12:05:58.182410       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0729 12:05:58.183803       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0729 12:05:58.183805       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.449834    3093 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d51aa0e-f3ce-4f29-9f05-1953193edbe7-lib-modules\") pod \"kube-proxy-5z2jx\" (UID: \"2d51aa0e-f3ce-4f29-9f05-1953193edbe7\") " pod="kube-system/kube-proxy-5z2jx"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.450169    3093 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b01e79a-fb4c-4177-a131-6cb670645a7c-xtables-lock\") pod \"kindnet-z96j2\" (UID: \"0b01e79a-fb4c-4177-a131-6cb670645a7c\") " pod="kube-system/kindnet-z96j2"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.450320    3093 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d51aa0e-f3ce-4f29-9f05-1953193edbe7-xtables-lock\") pod \"kube-proxy-5z2jx\" (UID: \"2d51aa0e-f3ce-4f29-9f05-1953193edbe7\") " pod="kube-system/kube-proxy-5z2jx"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.450775    3093 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0b01e79a-fb4c-4177-a131-6cb670645a7c-cni-cfg\") pod \"kindnet-z96j2\" (UID: \"0b01e79a-fb4c-4177-a131-6cb670645a7c\") " pod="kube-system/kindnet-z96j2"
	Jul 29 12:05:58 multinode-293807 kubelet[3093]: I0729 12:05:58.451221    3093 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b01e79a-fb4c-4177-a131-6cb670645a7c-lib-modules\") pod \"kindnet-z96j2\" (UID: \"0b01e79a-fb4c-4177-a131-6cb670645a7c\") " pod="kube-system/kindnet-z96j2"
	Jul 29 12:06:54 multinode-293807 kubelet[3093]: E0729 12:06:54.456608    3093 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:06:54 multinode-293807 kubelet[3093]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:06:54 multinode-293807 kubelet[3093]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:06:54 multinode-293807 kubelet[3093]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:06:54 multinode-293807 kubelet[3093]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:07:54 multinode-293807 kubelet[3093]: E0729 12:07:54.456821    3093 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:07:54 multinode-293807 kubelet[3093]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:07:54 multinode-293807 kubelet[3093]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:07:54 multinode-293807 kubelet[3093]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:07:54 multinode-293807 kubelet[3093]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:08:54 multinode-293807 kubelet[3093]: E0729 12:08:54.458001    3093 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:08:54 multinode-293807 kubelet[3093]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:08:54 multinode-293807 kubelet[3093]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:08:54 multinode-293807 kubelet[3093]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:08:54 multinode-293807 kubelet[3093]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 12:09:54 multinode-293807 kubelet[3093]: E0729 12:09:54.456334    3093 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 12:09:54 multinode-293807 kubelet[3093]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 12:09:54 multinode-293807 kubelet[3093]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 12:09:54 multinode-293807 kubelet[3093]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 12:09:54 multinode-293807 kubelet[3093]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 12:10:02.363445  155851 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19336-113730/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-293807 -n multinode-293807
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-293807 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.34s)

                                                
                                    
TestPreload (167.33s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-988528 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0729 12:14:27.394553  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-988528 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m37.43462955s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-988528 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-988528 image pull gcr.io/k8s-minikube/busybox: (1.732469554s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-988528
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-988528: (7.290081681s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-988528 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-988528 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (57.806800435s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-988528 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-07-29 12:17:01.338311494 +0000 UTC m=+5477.802843329
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-988528 -n test-preload-988528
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-988528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-988528 logs -n 25: (1.077505312s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n multinode-293807 sudo cat                                       | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /home/docker/cp-test_multinode-293807-m03_multinode-293807.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-293807 cp multinode-293807-m03:/home/docker/cp-test.txt                       | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m02:/home/docker/cp-test_multinode-293807-m03_multinode-293807-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n                                                                 | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | multinode-293807-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-293807 ssh -n multinode-293807-m02 sudo cat                                   | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	|         | /home/docker/cp-test_multinode-293807-m03_multinode-293807-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-293807 node stop m03                                                          | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:01 UTC |
	| node    | multinode-293807 node start                                                             | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:01 UTC | 29 Jul 24 12:02 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-293807                                                                | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC |                     |
	| stop    | -p multinode-293807                                                                     | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:02 UTC |                     |
	| start   | -p multinode-293807                                                                     | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:04 UTC | 29 Jul 24 12:07 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-293807                                                                | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC |                     |
	| node    | multinode-293807 node delete                                                            | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC | 29 Jul 24 12:07 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-293807 stop                                                                   | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:07 UTC |                     |
	| start   | -p multinode-293807                                                                     | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:10 UTC | 29 Jul 24 12:13 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-293807                                                                | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:13 UTC |                     |
	| start   | -p multinode-293807-m02                                                                 | multinode-293807-m02 | jenkins | v1.33.1 | 29 Jul 24 12:13 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-293807-m03                                                                 | multinode-293807-m03 | jenkins | v1.33.1 | 29 Jul 24 12:13 UTC | 29 Jul 24 12:14 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-293807                                                                 | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:14 UTC |                     |
	| delete  | -p multinode-293807-m03                                                                 | multinode-293807-m03 | jenkins | v1.33.1 | 29 Jul 24 12:14 UTC | 29 Jul 24 12:14 UTC |
	| delete  | -p multinode-293807                                                                     | multinode-293807     | jenkins | v1.33.1 | 29 Jul 24 12:14 UTC | 29 Jul 24 12:14 UTC |
	| start   | -p test-preload-988528                                                                  | test-preload-988528  | jenkins | v1.33.1 | 29 Jul 24 12:14 UTC | 29 Jul 24 12:15 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-988528 image pull                                                          | test-preload-988528  | jenkins | v1.33.1 | 29 Jul 24 12:15 UTC | 29 Jul 24 12:15 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-988528                                                                  | test-preload-988528  | jenkins | v1.33.1 | 29 Jul 24 12:15 UTC | 29 Jul 24 12:16 UTC |
	| start   | -p test-preload-988528                                                                  | test-preload-988528  | jenkins | v1.33.1 | 29 Jul 24 12:16 UTC | 29 Jul 24 12:17 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-988528 image list                                                          | test-preload-988528  | jenkins | v1.33.1 | 29 Jul 24 12:17 UTC | 29 Jul 24 12:17 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:16:03
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:16:03.350468  158316 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:16:03.350594  158316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:16:03.350604  158316 out.go:304] Setting ErrFile to fd 2...
	I0729 12:16:03.350611  158316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:16:03.350803  158316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 12:16:03.351346  158316 out.go:298] Setting JSON to false
	I0729 12:16:03.352246  158316 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7114,"bootTime":1722248249,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:16:03.352311  158316 start.go:139] virtualization: kvm guest
	I0729 12:16:03.354752  158316 out.go:177] * [test-preload-988528] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:16:03.356475  158316 notify.go:220] Checking for updates...
	I0729 12:16:03.356551  158316 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 12:16:03.357771  158316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:16:03.359030  158316 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:16:03.360344  158316 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:16:03.361803  158316 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:16:03.363108  158316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:16:03.364771  158316 config.go:182] Loaded profile config "test-preload-988528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 12:16:03.365234  158316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:16:03.365305  158316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:16:03.379870  158316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I0729 12:16:03.380286  158316 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:16:03.380791  158316 main.go:141] libmachine: Using API Version  1
	I0729 12:16:03.380823  158316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:16:03.381223  158316 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:16:03.381436  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	I0729 12:16:03.383376  158316 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 12:16:03.384760  158316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:16:03.385095  158316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:16:03.385138  158316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:16:03.399653  158316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0729 12:16:03.400076  158316 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:16:03.400878  158316 main.go:141] libmachine: Using API Version  1
	I0729 12:16:03.400940  158316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:16:03.401917  158316 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:16:03.402104  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	I0729 12:16:03.435687  158316 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:16:03.437096  158316 start.go:297] selected driver: kvm2
	I0729 12:16:03.437112  158316 start.go:901] validating driver "kvm2" against &{Name:test-preload-988528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-988528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:16:03.437231  158316 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:16:03.437986  158316 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:16:03.438063  158316 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:16:03.452626  158316 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:16:03.452947  158316 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:16:03.453019  158316 cni.go:84] Creating CNI manager for ""
	I0729 12:16:03.453034  158316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:16:03.453092  158316 start.go:340] cluster config:
	{Name:test-preload-988528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-988528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:16:03.453183  158316 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:16:03.455584  158316 out.go:177] * Starting "test-preload-988528" primary control-plane node in "test-preload-988528" cluster
	I0729 12:16:03.456924  158316 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 12:16:03.485519  158316 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0729 12:16:03.485547  158316 cache.go:56] Caching tarball of preloaded images
	I0729 12:16:03.485681  158316 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 12:16:03.487342  158316 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0729 12:16:03.488483  158316 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:16:03.514543  158316 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0729 12:16:06.735054  158316 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:16:06.735154  158316 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 12:16:07.592672  158316 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
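The download step above appends a "?checksum=md5:..." query to the tarball URL and verifies the digest before caching. A minimal Go sketch of that stream-and-verify pattern (not minikube's actual download.go; the URL, destination path, and md5 below simply reuse the values from the log as illustrative inputs):

	// Sketch only: write the download to disk while hashing it, then compare
	// the MD5 against the expected digest carried in the URL's checksum query.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func downloadWithMD5(url, dest, wantHex string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()
		h := md5.New()
		// Hash the stream while copying it to the destination file.
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// Values mirror the log entry; treat them as placeholders.
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
			"/tmp/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4",
			"b2ee0ab83ed99f9e7ff71cb0cf27e8f9",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}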
	I0729 12:16:07.592808  158316 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/config.json ...
	I0729 12:16:07.593087  158316 start.go:360] acquireMachinesLock for test-preload-988528: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:16:07.593155  158316 start.go:364] duration metric: took 43.483µs to acquireMachinesLock for "test-preload-988528"
	I0729 12:16:07.593170  158316 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:16:07.593176  158316 fix.go:54] fixHost starting: 
	I0729 12:16:07.593488  158316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:16:07.593513  158316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:16:07.608058  158316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43651
	I0729 12:16:07.608469  158316 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:16:07.608983  158316 main.go:141] libmachine: Using API Version  1
	I0729 12:16:07.609015  158316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:16:07.609346  158316 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:16:07.609550  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	I0729 12:16:07.609823  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetState
	I0729 12:16:07.611580  158316 fix.go:112] recreateIfNeeded on test-preload-988528: state=Stopped err=<nil>
	I0729 12:16:07.611597  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	W0729 12:16:07.611751  158316 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:16:07.613666  158316 out.go:177] * Restarting existing kvm2 VM for "test-preload-988528" ...
	I0729 12:16:07.614829  158316 main.go:141] libmachine: (test-preload-988528) Calling .Start
	I0729 12:16:07.614967  158316 main.go:141] libmachine: (test-preload-988528) Ensuring networks are active...
	I0729 12:16:07.615681  158316 main.go:141] libmachine: (test-preload-988528) Ensuring network default is active
	I0729 12:16:07.615997  158316 main.go:141] libmachine: (test-preload-988528) Ensuring network mk-test-preload-988528 is active
	I0729 12:16:07.616404  158316 main.go:141] libmachine: (test-preload-988528) Getting domain xml...
	I0729 12:16:07.617250  158316 main.go:141] libmachine: (test-preload-988528) Creating domain...
	I0729 12:16:08.783121  158316 main.go:141] libmachine: (test-preload-988528) Waiting to get IP...
	I0729 12:16:08.784135  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:08.784555  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:08.784622  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:08.784542  158367 retry.go:31] will retry after 230.687483ms: waiting for machine to come up
	I0729 12:16:09.017112  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:09.017584  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:09.017617  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:09.017522  158367 retry.go:31] will retry after 270.290523ms: waiting for machine to come up
	I0729 12:16:09.288973  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:09.289329  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:09.289358  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:09.289280  158367 retry.go:31] will retry after 348.982227ms: waiting for machine to come up
	I0729 12:16:09.639766  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:09.640174  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:09.640206  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:09.640116  158367 retry.go:31] will retry after 514.636294ms: waiting for machine to come up
	I0729 12:16:10.156885  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:10.157266  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:10.157295  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:10.157214  158367 retry.go:31] will retry after 668.291092ms: waiting for machine to come up
	I0729 12:16:10.827109  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:10.827498  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:10.827514  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:10.827475  158367 retry.go:31] will retry after 785.587048ms: waiting for machine to come up
	I0729 12:16:11.614494  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:11.614901  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:11.614937  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:11.614852  158367 retry.go:31] will retry after 1.092324831s: waiting for machine to come up
	I0729 12:16:12.708517  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:12.708858  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:12.708914  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:12.708836  158367 retry.go:31] will retry after 1.48223256s: waiting for machine to come up
	I0729 12:16:14.193541  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:14.193944  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:14.193970  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:14.193847  158367 retry.go:31] will retry after 1.837353885s: waiting for machine to come up
	I0729 12:16:16.033859  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:16.034206  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:16.034235  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:16.034151  158367 retry.go:31] will retry after 2.008320897s: waiting for machine to come up
	I0729 12:16:18.043716  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:18.044187  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:18.044209  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:18.044147  158367 retry.go:31] will retry after 1.89809634s: waiting for machine to come up
	I0729 12:16:19.945067  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:19.945358  158316 main.go:141] libmachine: (test-preload-988528) DBG | unable to find current IP address of domain test-preload-988528 in network mk-test-preload-988528
	I0729 12:16:19.945385  158316 main.go:141] libmachine: (test-preload-988528) DBG | I0729 12:16:19.945305  158367 retry.go:31] will retry after 3.410906503s: waiting for machine to come up
	I0729 12:16:23.357637  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.358135  158316 main.go:141] libmachine: (test-preload-988528) Found IP for machine: 192.168.39.195
	I0729 12:16:23.358246  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has current primary IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.358268  158316 main.go:141] libmachine: (test-preload-988528) Reserving static IP address...
	I0729 12:16:23.358710  158316 main.go:141] libmachine: (test-preload-988528) Reserved static IP address: 192.168.39.195
	I0729 12:16:23.358733  158316 main.go:141] libmachine: (test-preload-988528) Waiting for SSH to be available...
	I0729 12:16:23.358779  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "test-preload-988528", mac: "52:54:00:d5:4b:f5", ip: "192.168.39.195"} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:23.358814  158316 main.go:141] libmachine: (test-preload-988528) DBG | skip adding static IP to network mk-test-preload-988528 - found existing host DHCP lease matching {name: "test-preload-988528", mac: "52:54:00:d5:4b:f5", ip: "192.168.39.195"}
	I0729 12:16:23.358831  158316 main.go:141] libmachine: (test-preload-988528) DBG | Getting to WaitForSSH function...
	I0729 12:16:23.361086  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.361445  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:23.361469  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.361667  158316 main.go:141] libmachine: (test-preload-988528) DBG | Using SSH client type: external
	I0729 12:16:23.361688  158316 main.go:141] libmachine: (test-preload-988528) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/test-preload-988528/id_rsa (-rw-------)
	I0729 12:16:23.361726  158316 main.go:141] libmachine: (test-preload-988528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/test-preload-988528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 12:16:23.361740  158316 main.go:141] libmachine: (test-preload-988528) DBG | About to run SSH command:
	I0729 12:16:23.361752  158316 main.go:141] libmachine: (test-preload-988528) DBG | exit 0
	I0729 12:16:23.484655  158316 main.go:141] libmachine: (test-preload-988528) DBG | SSH cmd err, output: <nil>: 
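The two wait loops above (first for the domain's DHCP lease, then for SSH to answer) follow the same retry-with-growing-delay pattern that retry.go logs as "will retry after ...". A minimal sketch of that pattern, with an illustrative growth factor rather than retry.go's exact backoff policy:

	// Sketch: call fn until it succeeds or attempts run out, sleeping a bit
	// longer after every failure, as in the "waiting for machine" loop above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func retry(maxAttempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < maxAttempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait between attempts
		}
		return err
	}

	func main() {
		attempts := 0
		err := retry(10, 250*time.Millisecond, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("done:", err)
	}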
	I0729 12:16:23.485078  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetConfigRaw
	I0729 12:16:23.485720  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetIP
	I0729 12:16:23.487926  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.488273  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:23.488306  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.488576  158316 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/config.json ...
	I0729 12:16:23.488800  158316 machine.go:94] provisionDockerMachine start ...
	I0729 12:16:23.488825  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	I0729 12:16:23.489052  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:23.490984  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.491289  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:23.491319  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.491456  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHPort
	I0729 12:16:23.491632  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:23.491795  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:23.491942  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHUsername
	I0729 12:16:23.492082  158316 main.go:141] libmachine: Using SSH client type: native
	I0729 12:16:23.492263  158316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0729 12:16:23.492273  158316 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:16:23.592936  158316 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 12:16:23.592996  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetMachineName
	I0729 12:16:23.593250  158316 buildroot.go:166] provisioning hostname "test-preload-988528"
	I0729 12:16:23.593284  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetMachineName
	I0729 12:16:23.593503  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:23.595887  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.596248  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:23.596273  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.596417  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHPort
	I0729 12:16:23.596621  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:23.596776  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:23.596953  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHUsername
	I0729 12:16:23.597249  158316 main.go:141] libmachine: Using SSH client type: native
	I0729 12:16:23.597515  158316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0729 12:16:23.597535  158316 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-988528 && echo "test-preload-988528" | sudo tee /etc/hostname
	I0729 12:16:23.709691  158316 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-988528
	
	I0729 12:16:23.709724  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:23.712350  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.712774  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:23.712813  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.713014  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHPort
	I0729 12:16:23.713234  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:23.713391  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:23.713536  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHUsername
	I0729 12:16:23.713683  158316 main.go:141] libmachine: Using SSH client type: native
	I0729 12:16:23.713841  158316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0729 12:16:23.713856  158316 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-988528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-988528/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-988528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:16:23.820743  158316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:16:23.820798  158316 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 12:16:23.820835  158316 buildroot.go:174] setting up certificates
	I0729 12:16:23.820850  158316 provision.go:84] configureAuth start
	I0729 12:16:23.820867  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetMachineName
	I0729 12:16:23.821168  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetIP
	I0729 12:16:23.824037  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.824430  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:23.824481  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.824581  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:23.826636  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.826987  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:23.827021  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.827140  158316 provision.go:143] copyHostCerts
	I0729 12:16:23.827194  158316 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 12:16:23.827205  158316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 12:16:23.827284  158316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 12:16:23.827408  158316 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 12:16:23.827420  158316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 12:16:23.827457  158316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 12:16:23.827533  158316 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 12:16:23.827543  158316 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 12:16:23.827571  158316 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 12:16:23.827635  158316 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.test-preload-988528 san=[127.0.0.1 192.168.39.195 localhost minikube test-preload-988528]
	I0729 12:16:23.952673  158316 provision.go:177] copyRemoteCerts
	I0729 12:16:23.952745  158316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:16:23.952770  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:23.955204  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.955459  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:23.955496  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:23.955636  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHPort
	I0729 12:16:23.955795  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:23.955931  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHUsername
	I0729 12:16:23.956053  158316 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/test-preload-988528/id_rsa Username:docker}
	I0729 12:16:24.034738  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:16:24.057734  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 12:16:24.080097  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 12:16:24.102083  158316 provision.go:87] duration metric: took 281.216509ms to configureAuth
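Every "ssh_runner.go ... Run:" and "scp" line above boils down to opening a key-authenticated SSH session to the VM (user docker, the id_rsa shown in the log) and executing one command. A minimal sketch using golang.org/x/crypto/ssh, assuming that package rather than minikube's own sshutil/ssh_runner wrappers:

	// Sketch: run a single command on the guest over SSH with key auth.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func runRemote(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.39.195:22", "docker",
			"/home/jenkins/minikube-integration/19336-113730/.minikube/machines/test-preload-988528/id_rsa",
			"cat /etc/os-release")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(out)
	}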
	I0729 12:16:24.102119  158316 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:16:24.102306  158316 config.go:182] Loaded profile config "test-preload-988528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 12:16:24.102384  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:24.104845  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.105161  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:24.105183  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.105356  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHPort
	I0729 12:16:24.105539  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:24.105713  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:24.105866  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHUsername
	I0729 12:16:24.106015  158316 main.go:141] libmachine: Using SSH client type: native
	I0729 12:16:24.106166  158316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0729 12:16:24.106180  158316 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:16:24.357207  158316 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:16:24.357239  158316 machine.go:97] duration metric: took 868.422112ms to provisionDockerMachine
	I0729 12:16:24.357255  158316 start.go:293] postStartSetup for "test-preload-988528" (driver="kvm2")
	I0729 12:16:24.357267  158316 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:16:24.357284  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	I0729 12:16:24.357599  158316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:16:24.357648  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:24.360537  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.360841  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:24.360867  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.361036  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHPort
	I0729 12:16:24.361234  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:24.361496  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHUsername
	I0729 12:16:24.361683  158316 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/test-preload-988528/id_rsa Username:docker}
	I0729 12:16:24.443120  158316 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:16:24.447184  158316 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:16:24.447214  158316 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 12:16:24.447346  158316 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 12:16:24.447425  158316 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 12:16:24.447511  158316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:16:24.456365  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:16:24.479176  158316 start.go:296] duration metric: took 121.904732ms for postStartSetup
	I0729 12:16:24.479228  158316 fix.go:56] duration metric: took 16.886051603s for fixHost
	I0729 12:16:24.479252  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:24.481747  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.482051  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:24.482079  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.482249  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHPort
	I0729 12:16:24.482463  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:24.482628  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:24.482769  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHUsername
	I0729 12:16:24.482931  158316 main.go:141] libmachine: Using SSH client type: native
	I0729 12:16:24.483092  158316 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0729 12:16:24.483102  158316 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:16:24.585385  158316 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722255384.558365979
	
	I0729 12:16:24.585415  158316 fix.go:216] guest clock: 1722255384.558365979
	I0729 12:16:24.585425  158316 fix.go:229] Guest: 2024-07-29 12:16:24.558365979 +0000 UTC Remote: 2024-07-29 12:16:24.479233024 +0000 UTC m=+21.163494410 (delta=79.132955ms)
	I0729 12:16:24.585454  158316 fix.go:200] guest clock delta is within tolerance: 79.132955ms
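The fix.go lines above read the guest clock with "date +%s.%N" over SSH and compare it to the host clock, reporting the delta. A small sketch of that comparison, parsing the same fractional-seconds format (this is an illustration, not the actual fix.go tolerance logic):

	// Sketch: turn a "date +%s.%N" reading into a time.Time and report the
	// skew against the local clock.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Right-pad to nine digits so ".5" means 500ms, not 5ns.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1722255384.558365979") // value from the log
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v\n", delta)
	}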
	I0729 12:16:24.585462  158316 start.go:83] releasing machines lock for "test-preload-988528", held for 16.992297558s
	I0729 12:16:24.585480  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	I0729 12:16:24.585740  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetIP
	I0729 12:16:24.588318  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.588671  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:24.588694  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.588918  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	I0729 12:16:24.589443  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	I0729 12:16:24.589638  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	I0729 12:16:24.589740  158316 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:16:24.589807  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:24.589826  158316 ssh_runner.go:195] Run: cat /version.json
	I0729 12:16:24.589841  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:24.592416  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.592662  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.592731  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:24.592755  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.592932  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHPort
	I0729 12:16:24.593058  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:24.593080  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:24.593141  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:24.593246  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHPort
	I0729 12:16:24.593315  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHUsername
	I0729 12:16:24.593370  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:24.593437  158316 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/test-preload-988528/id_rsa Username:docker}
	I0729 12:16:24.593492  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHUsername
	I0729 12:16:24.593618  158316 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/test-preload-988528/id_rsa Username:docker}
	I0729 12:16:24.669736  158316 ssh_runner.go:195] Run: systemctl --version
	I0729 12:16:24.690218  158316 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:16:24.833655  158316 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:16:24.839219  158316 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:16:24.839293  158316 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:16:24.855269  158316 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 12:16:24.855304  158316 start.go:495] detecting cgroup driver to use...
	I0729 12:16:24.855387  158316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:16:24.871974  158316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:16:24.886071  158316 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:16:24.886145  158316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:16:24.900133  158316 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:16:24.913818  158316 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:16:25.023082  158316 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:16:25.175581  158316 docker.go:233] disabling docker service ...
	I0729 12:16:25.175652  158316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:16:25.192293  158316 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:16:25.204691  158316 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:16:25.321298  158316 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:16:25.433311  158316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:16:25.447290  158316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:16:25.464886  158316 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0729 12:16:25.464950  158316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:16:25.474671  158316 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:16:25.474736  158316 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:16:25.484518  158316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:16:25.494209  158316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:16:25.504069  158316 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:16:25.513973  158316 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:16:25.524482  158316 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:16:25.541320  158316 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:16:25.551629  158316 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:16:25.560762  158316 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 12:16:25.560838  158316 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 12:16:25.572856  158316 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:16:25.581945  158316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:16:25.687301  158316 ssh_runner.go:195] Run: sudo systemctl restart crio
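The CRI-O settings above (pause image, cgroup manager, conmon cgroup, sysctls) are applied by sed-editing /etc/crio/crio.conf.d/02-crio.conf in place on the guest, then restarting the service. A hedged Go sketch of the same read/replace/write pattern for two of those keys; the path and values come from the log, but this is not minikube's actual crio.go code:

	// Sketch: replace "key = ..." lines in a config file, mirroring the
	// "sed -i 's|^.*pause_image = .*$|...|'" edits shown above.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func setConfigKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		line := fmt.Sprintf(`%s = %q`, key, value)
		return os.WriteFile(path, re.ReplaceAll(data, []byte(line)), 0o644)
	}

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		if err := setConfigKey(conf, "pause_image", "registry.k8s.io/pause:3.7"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if err := setConfigKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}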
	I0729 12:16:25.816994  158316 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:16:25.817069  158316 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:16:25.821458  158316 start.go:563] Will wait 60s for crictl version
	I0729 12:16:25.821531  158316 ssh_runner.go:195] Run: which crictl
	I0729 12:16:25.824865  158316 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:16:25.858834  158316 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:16:25.858905  158316 ssh_runner.go:195] Run: crio --version
	I0729 12:16:25.885267  158316 ssh_runner.go:195] Run: crio --version
	I0729 12:16:25.914061  158316 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0729 12:16:25.915385  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetIP
	I0729 12:16:25.918043  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:25.918319  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:25.918348  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:25.918530  158316 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:16:25.922505  158316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:16:25.934091  158316 kubeadm.go:883] updating cluster {Name:test-preload-988528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-988528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:16:25.934220  158316 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 12:16:25.934262  158316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:16:25.966937  158316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0729 12:16:25.967003  158316 ssh_runner.go:195] Run: which lz4
	I0729 12:16:25.970798  158316 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 12:16:25.974971  158316 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 12:16:25.975002  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0729 12:16:27.297990  158316 crio.go:462] duration metric: took 1.327218329s to copy over tarball
	I0729 12:16:27.298079  158316 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 12:16:29.604168  158316 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.306046604s)
	I0729 12:16:29.604202  158316 crio.go:469] duration metric: took 2.306182725s to extract the tarball
	I0729 12:16:29.604213  158316 ssh_runner.go:146] rm: /preloaded.tar.lz4
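The preload is applied by copying the lz4 tarball to the guest and unpacking it into /var with tar. For reference, a sketch of the equivalent extraction invoked locally via os/exec (it assumes tar and lz4 are installed; on the real run the same command is executed over SSH by ssh_runner):

	// Sketch: the "tar --xattrs ... -I lz4 -C /var -xf" extraction step above,
	// run as a local subprocess.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "extract failed:", err)
			os.Exit(1)
		}
	}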
	I0729 12:16:29.643903  158316 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:16:29.683937  158316 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0729 12:16:29.683964  158316 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 12:16:29.684010  158316 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 12:16:29.684043  158316 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 12:16:29.684081  158316 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 12:16:29.684109  158316 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 12:16:29.684168  158316 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 12:16:29.684203  158316 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 12:16:29.684203  158316 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 12:16:29.684220  158316 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 12:16:29.685752  158316 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 12:16:29.685751  158316 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 12:16:29.685753  158316 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 12:16:29.685760  158316 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 12:16:29.685816  158316 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 12:16:29.685748  158316 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 12:16:29.685751  158316 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 12:16:29.685752  158316 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 12:16:29.822040  158316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 12:16:29.826119  158316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0729 12:16:29.827659  158316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 12:16:29.830726  158316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0729 12:16:29.839108  158316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 12:16:29.872802  158316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 12:16:29.892723  158316 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0729 12:16:29.892773  158316 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 12:16:29.892813  158316 ssh_runner.go:195] Run: which crictl
	I0729 12:16:29.905301  158316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0729 12:16:29.969764  158316 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0729 12:16:29.969806  158316 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 12:16:29.969844  158316 ssh_runner.go:195] Run: which crictl
	I0729 12:16:29.979945  158316 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0729 12:16:29.979982  158316 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 12:16:29.980026  158316 ssh_runner.go:195] Run: which crictl
	I0729 12:16:29.980914  158316 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0729 12:16:29.980949  158316 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 12:16:29.980986  158316 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0729 12:16:29.981005  158316 ssh_runner.go:195] Run: which crictl
	I0729 12:16:29.981015  158316 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0729 12:16:29.981048  158316 ssh_runner.go:195] Run: which crictl
	I0729 12:16:29.993432  158316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0729 12:16:29.993467  158316 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0729 12:16:29.993501  158316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0729 12:16:29.993510  158316 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 12:16:29.993516  158316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 12:16:29.993521  158316 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0729 12:16:29.993546  158316 ssh_runner.go:195] Run: which crictl
	I0729 12:16:29.993546  158316 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 12:16:29.993652  158316 ssh_runner.go:195] Run: which crictl
	I0729 12:16:29.993673  158316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0729 12:16:29.993886  158316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0729 12:16:30.087068  158316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 12:16:30.087101  158316 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0729 12:16:30.096061  158316 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 12:16:30.096086  158316 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0729 12:16:30.096087  158316 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 12:16:30.096175  158316 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 12:16:30.096176  158316 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 12:16:30.096176  158316 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0729 12:16:30.098417  158316 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0729 12:16:30.098472  158316 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0729 12:16:30.098486  158316 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0729 12:16:30.098543  158316 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 12:16:30.145518  158316 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 12:16:30.145549  158316 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 12:16:30.145563  158316 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0729 12:16:30.145574  158316 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 12:16:30.145606  158316 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0729 12:16:30.145616  158316 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 12:16:30.145637  158316 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 12:16:30.145639  158316 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 12:16:30.145638  158316 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0729 12:16:30.145667  158316 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0729 12:16:30.145684  158316 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0729 12:16:30.352595  158316 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 12:16:33.402887  158316 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.257242509s)
	I0729 12:16:33.402923  158316 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0729 12:16:33.402943  158316 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 12:16:33.402946  158316 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.257286047s)
	I0729 12:16:33.402980  158316 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0729 12:16:33.402984  158316 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0729 12:16:33.403024  158316 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.257369667s)
	I0729 12:16:33.403044  158316 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0729 12:16:33.403071  158316 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.050439727s)
	I0729 12:16:35.649700  158316 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.246691828s)
	I0729 12:16:35.649735  158316 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 12:16:35.649763  158316 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 12:16:35.649818  158316 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0729 12:16:35.986494  158316 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 12:16:35.986541  158316 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 12:16:35.986599  158316 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0729 12:16:36.125665  158316 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0729 12:16:36.125705  158316 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 12:16:36.125812  158316 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 12:16:36.564769  158316 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0729 12:16:36.564826  158316 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 12:16:36.564902  158316 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 12:16:37.710996  158316 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (1.146067716s)
	I0729 12:16:37.711025  158316 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0729 12:16:37.711050  158316 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 12:16:37.711089  158316 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 12:16:38.556180  158316 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0729 12:16:38.556225  158316 cache_images.go:123] Successfully loaded all cached images
	I0729 12:16:38.556230  158316 cache_images.go:92] duration metric: took 8.872254933s to LoadCachedImages
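
Since the extracted preload still did not contain the v1.24.4 images, minikube falls back to LoadCachedImages and repeats the same three steps for every image in the list above. A condensed sketch for a single image, using the commands visible in the log (the shell variables are only for illustration; minikube loops over all eight images):

	IMG=registry.k8s.io/kube-apiserver:v1.24.4
	TAR=/var/lib/minikube/images/kube-apiserver_v1.24.4

	# 1. does the runtime already hold the image at the expected digest?
	sudo podman image inspect --format '{{.Id}}' "$IMG"

	# 2. if not (or the hash differs), remove any stale copy ...
	sudo /usr/bin/crictl rmi "$IMG"

	# 3. ... and load the cached tarball that was copied to the node
	sudo podman load -i "$TAR"
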
	I0729 12:16:38.556243  158316 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.24.4 crio true true} ...
	I0729 12:16:38.556347  158316 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-988528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-988528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:16:38.556413  158316 ssh_runner.go:195] Run: crio config
	I0729 12:16:38.601066  158316 cni.go:84] Creating CNI manager for ""
	I0729 12:16:38.601089  158316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:16:38.601098  158316 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:16:38.601117  158316 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-988528 NodeName:test-preload-988528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:16:38.601239  158316 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-988528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:16:38.601297  158316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0729 12:16:38.610760  158316 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:16:38.610843  158316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 12:16:38.619776  158316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0729 12:16:38.635102  158316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:16:38.650346  158316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0729 12:16:38.666249  158316 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I0729 12:16:38.669895  158316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:16:38.681509  158316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:16:38.790449  158316 ssh_runner.go:195] Run: sudo systemctl start kubelet
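
The scp/systemctl lines above install the kubelet drop-in and service file generated earlier, then reload systemd and start kubelet. A shell sketch of the same sequence; the drop-in body is the [Unit]/[Service]/[Install] fragment echoed above, and tee stands in for minikube's in-memory scp:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube

	# kubelet drop-in built from the ExecStart line shown earlier in the log
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-988528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195

	[Install]
	EOF

	sudo systemctl daemon-reload
	sudo systemctl start kubelet
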
	I0729 12:16:38.807532  158316 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528 for IP: 192.168.39.195
	I0729 12:16:38.807558  158316 certs.go:194] generating shared ca certs ...
	I0729 12:16:38.807578  158316 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:16:38.807735  158316 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 12:16:38.807787  158316 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 12:16:38.807814  158316 certs.go:256] generating profile certs ...
	I0729 12:16:38.807923  158316 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/client.key
	I0729 12:16:38.807997  158316 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/apiserver.key.47b34a59
	I0729 12:16:38.808037  158316 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/proxy-client.key
	I0729 12:16:38.808153  158316 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 12:16:38.808181  158316 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 12:16:38.808190  158316 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 12:16:38.808209  158316 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 12:16:38.808234  158316 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:16:38.808253  158316 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 12:16:38.808288  158316 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:16:38.809028  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:16:38.837922  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 12:16:38.862472  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:16:38.894645  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:16:38.930632  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 12:16:38.974694  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 12:16:38.998761  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:16:39.021950  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 12:16:39.043943  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:16:39.065785  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 12:16:39.087800  158316 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 12:16:39.109725  158316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:16:39.125174  158316 ssh_runner.go:195] Run: openssl version
	I0729 12:16:39.130543  158316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 12:16:39.140530  158316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 12:16:39.144653  158316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 12:16:39.144710  158316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 12:16:39.150030  158316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
	I0729 12:16:39.160558  158316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 12:16:39.171065  158316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 12:16:39.175288  158316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 12:16:39.175356  158316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 12:16:39.180675  158316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:16:39.190930  158316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:16:39.201398  158316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:16:39.205467  158316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:16:39.205519  158316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:16:39.210884  158316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:16:39.220882  158316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:16:39.225012  158316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:16:39.230630  158316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:16:39.236165  158316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:16:39.241705  158316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:16:39.247022  158316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:16:39.252559  158316 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
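
The openssl invocations above do two jobs: link each CA into the system trust store under its OpenSSL subject-hash name, and confirm that every control-plane certificate is still valid for at least another day. A sketch of both checks for one CA and one certificate, with paths taken from the log (86400 seconds = 24 hours):

	PEM=/usr/share/ca-certificates/minikubeCA.pem

	# compute the subject hash (b5213941 in the log) and link the CA under it
	HASH=$(openssl x509 -hash -noout -in "$PEM")
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"

	# -checkend fails if the certificate expires within the given number of seconds
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "certificate valid for at least 24h" \
	  || echo "certificate expires within 24h - would need regeneration"
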
	I0729 12:16:39.257932  158316 kubeadm.go:392] StartCluster: {Name:test-preload-988528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-988528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:16:39.258037  158316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:16:39.258089  158316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:16:39.291373  158316 cri.go:89] found id: ""
	I0729 12:16:39.291442  158316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 12:16:39.301570  158316 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 12:16:39.301596  158316 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 12:16:39.301654  158316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 12:16:39.311341  158316 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 12:16:39.311800  158316 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-988528" does not appear in /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:16:39.311900  158316 kubeconfig.go:62] /home/jenkins/minikube-integration/19336-113730/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-988528" cluster setting kubeconfig missing "test-preload-988528" context setting]
	I0729 12:16:39.312274  158316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/kubeconfig: {Name:mkb219e196dca6dd8aa7af14918c6562be58786a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:16:39.312897  158316 kapi.go:59] client config for test-preload-988528: &rest.Config{Host:"https://192.168.39.195:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/client.crt", KeyFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/client.key", CAFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 12:16:39.313574  158316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 12:16:39.323087  158316 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.195
	I0729 12:16:39.323126  158316 kubeadm.go:1160] stopping kube-system containers ...
	I0729 12:16:39.323143  158316 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 12:16:39.323209  158316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:16:39.357285  158316 cri.go:89] found id: ""
	I0729 12:16:39.357362  158316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 12:16:39.373189  158316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 12:16:39.382494  158316 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 12:16:39.382518  158316 kubeadm.go:157] found existing configuration files:
	
	I0729 12:16:39.382598  158316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 12:16:39.391387  158316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 12:16:39.391460  158316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 12:16:39.400383  158316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 12:16:39.409036  158316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 12:16:39.409106  158316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 12:16:39.418323  158316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 12:16:39.426991  158316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 12:16:39.427061  158316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 12:16:39.436007  158316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 12:16:39.444669  158316 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 12:16:39.444730  158316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 12:16:39.453913  158316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 12:16:39.462970  158316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:16:39.540237  158316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:16:40.351214  158316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:16:40.598025  158316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:16:40.662391  158316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
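
Because stale configuration files were found and removed, minikube rebuilds the control plane with individual `kubeadm init phase` calls rather than a full `kubeadm init`. The sequence recorded above, plus the addon phase that runs later once the apiserver is healthy, as a plain shell sketch (commands taken verbatim from the log):

	# promote the rendered config written earlier as kubeadm.yaml.new
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml

	# rebuild the control plane phase by phase
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
	# ... wait for https://<node-ip>:8443/healthz to return 200 "ok", then:
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml
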
	I0729 12:16:40.747280  158316 api_server.go:52] waiting for apiserver process to appear ...
	I0729 12:16:40.747386  158316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:16:41.247518  158316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:16:41.747757  158316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:16:41.764108  158316 api_server.go:72] duration metric: took 1.016829438s to wait for apiserver process to appear ...
	I0729 12:16:41.764144  158316 api_server.go:88] waiting for apiserver healthz status ...
	I0729 12:16:41.764169  158316 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0729 12:16:41.764713  158316 api_server.go:269] stopped: https://192.168.39.195:8443/healthz: Get "https://192.168.39.195:8443/healthz": dial tcp 192.168.39.195:8443: connect: connection refused
	I0729 12:16:42.264261  158316 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0729 12:16:45.455870  158316 api_server.go:279] https://192.168.39.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 12:16:45.455901  158316 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 12:16:45.455914  158316 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0729 12:16:45.503637  158316 api_server.go:279] https://192.168.39.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 12:16:45.503677  158316 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 12:16:45.765075  158316 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0729 12:16:45.773625  158316 api_server.go:279] https://192.168.39.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 12:16:45.773676  158316 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 12:16:46.265257  158316 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0729 12:16:46.270746  158316 api_server.go:279] https://192.168.39.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 12:16:46.270776  158316 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 12:16:46.764370  158316 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0729 12:16:46.770108  158316 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0729 12:16:46.779998  158316 api_server.go:141] control plane version: v1.24.4
	I0729 12:16:46.780036  158316 api_server.go:131] duration metric: took 5.015883536s to wait for apiserver health ...
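
The healthz wait above is a simple poll: roughly every half second the client GETs /healthz and keeps retrying while it sees connection refused, the 403 from anonymous access, or a 500 while post-start hooks (RBAC bootstrap roles, priority classes) finish. A rough shell equivalent; the curl client-cert flags and the profile path are assumptions, modelled on the profile certificates named elsewhere in the log:

	HOST=https://192.168.39.195:8443
	PROFILE=$HOME/.minikube/profiles/test-preload-988528   # illustrative path

	until curl -sk --cert "$PROFILE/client.crt" --key "$PROFILE/client.key" \
	      "$HOST/healthz" | grep -qx ok
	do
	    echo "apiserver not healthy yet, retrying ..."
	    sleep 0.5
	done
	echo "apiserver reports healthy"
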
	I0729 12:16:46.780049  158316 cni.go:84] Creating CNI manager for ""
	I0729 12:16:46.780057  158316 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:16:46.781947  158316 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 12:16:46.783116  158316 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 12:16:46.796161  158316 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
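
Configuring the bridge CNI amounts to writing a small conflist (496 bytes here) into /etc/cni/net.d. The actual file contents are not shown in the log; the sketch below is a typical bridge + host-local configuration of the same shape, using the 10.244.0.0/16 pod CIDR chosen above (hypothetical contents, not the exact file minikube ships):

	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
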
	I0729 12:16:46.823390  158316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 12:16:46.835632  158316 system_pods.go:59] 7 kube-system pods found
	I0729 12:16:46.835671  158316 system_pods.go:61] "coredns-6d4b75cb6d-dcrxs" [b89fb75c-d950-4cee-a4d6-5a6a9df055b9] Running
	I0729 12:16:46.835678  158316 system_pods.go:61] "etcd-test-preload-988528" [f9d896e7-76be-4c80-a494-064a843f216d] Running
	I0729 12:16:46.835683  158316 system_pods.go:61] "kube-apiserver-test-preload-988528" [c5598f7d-d16a-4ae9-9265-5d046e2dcdbe] Running
	I0729 12:16:46.835697  158316 system_pods.go:61] "kube-controller-manager-test-preload-988528" [288cab2c-2810-4b47-8c15-c79f1c1c6244] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 12:16:46.835703  158316 system_pods.go:61] "kube-proxy-kkdbj" [3afdbf3e-da23-4d8c-bbd8-015e6d05c77e] Running
	I0729 12:16:46.835710  158316 system_pods.go:61] "kube-scheduler-test-preload-988528" [206be2b1-a37a-47e8-85c2-8c071b596f4c] Running
	I0729 12:16:46.835714  158316 system_pods.go:61] "storage-provisioner" [5335b864-c100-4d28-b174-ad1a5ecddf2d] Running
	I0729 12:16:46.835728  158316 system_pods.go:74] duration metric: took 12.302199ms to wait for pod list to return data ...
	I0729 12:16:46.835741  158316 node_conditions.go:102] verifying NodePressure condition ...
	I0729 12:16:46.839668  158316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 12:16:46.839699  158316 node_conditions.go:123] node cpu capacity is 2
	I0729 12:16:46.839713  158316 node_conditions.go:105] duration metric: took 3.966055ms to run NodePressure ...
	I0729 12:16:46.839734  158316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:16:47.081970  158316 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 12:16:47.085940  158316 kubeadm.go:739] kubelet initialised
	I0729 12:16:47.085964  158316 kubeadm.go:740] duration metric: took 3.967061ms waiting for restarted kubelet to initialise ...
	I0729 12:16:47.085974  158316 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:16:47.092028  158316 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-dcrxs" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:47.106745  158316 pod_ready.go:97] node "test-preload-988528" hosting pod "coredns-6d4b75cb6d-dcrxs" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:47.106776  158316 pod_ready.go:81] duration metric: took 14.724336ms for pod "coredns-6d4b75cb6d-dcrxs" in "kube-system" namespace to be "Ready" ...
	E0729 12:16:47.106786  158316 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-988528" hosting pod "coredns-6d4b75cb6d-dcrxs" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:47.106793  158316 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:47.119059  158316 pod_ready.go:97] node "test-preload-988528" hosting pod "etcd-test-preload-988528" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:47.119094  158316 pod_ready.go:81] duration metric: took 12.293415ms for pod "etcd-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	E0729 12:16:47.119106  158316 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-988528" hosting pod "etcd-test-preload-988528" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:47.119115  158316 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:47.125219  158316 pod_ready.go:97] node "test-preload-988528" hosting pod "kube-apiserver-test-preload-988528" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:47.125259  158316 pod_ready.go:81] duration metric: took 6.131668ms for pod "kube-apiserver-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	E0729 12:16:47.125272  158316 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-988528" hosting pod "kube-apiserver-test-preload-988528" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:47.125282  158316 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:47.228643  158316 pod_ready.go:97] node "test-preload-988528" hosting pod "kube-controller-manager-test-preload-988528" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:47.228673  158316 pod_ready.go:81] duration metric: took 103.376867ms for pod "kube-controller-manager-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	E0729 12:16:47.228685  158316 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-988528" hosting pod "kube-controller-manager-test-preload-988528" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:47.228694  158316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kkdbj" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:47.628577  158316 pod_ready.go:97] node "test-preload-988528" hosting pod "kube-proxy-kkdbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:47.628603  158316 pod_ready.go:81] duration metric: took 399.898166ms for pod "kube-proxy-kkdbj" in "kube-system" namespace to be "Ready" ...
	E0729 12:16:47.628611  158316 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-988528" hosting pod "kube-proxy-kkdbj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:47.628622  158316 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:48.028335  158316 pod_ready.go:97] node "test-preload-988528" hosting pod "kube-scheduler-test-preload-988528" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:48.028369  158316 pod_ready.go:81] duration metric: took 399.739678ms for pod "kube-scheduler-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	E0729 12:16:48.028383  158316 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-988528" hosting pod "kube-scheduler-test-preload-988528" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:48.028399  158316 pod_ready.go:38] duration metric: took 942.406868ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
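
Every pod check above is skipped for the same reason: the node itself still reports Ready=False right after the kubelet restart, so no pod scheduled on it can be counted as Ready yet. An equivalent manual check with kubectl (context and resource names as in the log):

	# the node condition gates everything else
	kubectl --context test-preload-988528 get node test-preload-988528 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	# once the node is Ready, wait for the system-critical pods minikube tracks
	kubectl --context test-preload-988528 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
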
	I0729 12:16:48.028424  158316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 12:16:48.039542  158316 ops.go:34] apiserver oom_adj: -16
	I0729 12:16:48.039568  158316 kubeadm.go:597] duration metric: took 8.737965515s to restartPrimaryControlPlane
	I0729 12:16:48.039577  158316 kubeadm.go:394] duration metric: took 8.781655623s to StartCluster
	I0729 12:16:48.039600  158316 settings.go:142] acquiring lock: {Name:mkb2a487c2f52476061a6d736b8e75563062eb9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:16:48.039670  158316 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:16:48.040325  158316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/kubeconfig: {Name:mkb219e196dca6dd8aa7af14918c6562be58786a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:16:48.040552  158316 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:16:48.040637  158316 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 12:16:48.040758  158316 addons.go:69] Setting storage-provisioner=true in profile "test-preload-988528"
	I0729 12:16:48.040767  158316 addons.go:69] Setting default-storageclass=true in profile "test-preload-988528"
	I0729 12:16:48.040795  158316 addons.go:234] Setting addon storage-provisioner=true in "test-preload-988528"
	W0729 12:16:48.040804  158316 addons.go:243] addon storage-provisioner should already be in state true
	I0729 12:16:48.040839  158316 host.go:66] Checking if "test-preload-988528" exists ...
	I0729 12:16:48.040860  158316 config.go:182] Loaded profile config "test-preload-988528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 12:16:48.040804  158316 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-988528"
	I0729 12:16:48.041179  158316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:16:48.041203  158316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:16:48.041256  158316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:16:48.041296  158316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:16:48.042658  158316 out.go:177] * Verifying Kubernetes components...
	I0729 12:16:48.044540  158316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:16:48.056212  158316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I0729 12:16:48.056702  158316 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:16:48.057297  158316 main.go:141] libmachine: Using API Version  1
	I0729 12:16:48.057322  158316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:16:48.057678  158316 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:16:48.058181  158316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:16:48.058227  158316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:16:48.058838  158316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I0729 12:16:48.059201  158316 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:16:48.059681  158316 main.go:141] libmachine: Using API Version  1
	I0729 12:16:48.059705  158316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:16:48.060013  158316 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:16:48.060193  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetState
	I0729 12:16:48.062525  158316 kapi.go:59] client config for test-preload-988528: &rest.Config{Host:"https://192.168.39.195:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/client.crt", KeyFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/test-preload-988528/client.key", CAFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 12:16:48.062858  158316 addons.go:234] Setting addon default-storageclass=true in "test-preload-988528"
	W0729 12:16:48.062877  158316 addons.go:243] addon default-storageclass should already be in state true
	I0729 12:16:48.062904  158316 host.go:66] Checking if "test-preload-988528" exists ...
	I0729 12:16:48.063265  158316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:16:48.063293  158316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:16:48.073079  158316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33535
	I0729 12:16:48.073555  158316 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:16:48.074086  158316 main.go:141] libmachine: Using API Version  1
	I0729 12:16:48.074109  158316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:16:48.074461  158316 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:16:48.074695  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetState
	I0729 12:16:48.076504  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	I0729 12:16:48.078166  158316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36225
	I0729 12:16:48.078604  158316 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:16:48.078839  158316 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 12:16:48.079052  158316 main.go:141] libmachine: Using API Version  1
	I0729 12:16:48.079071  158316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:16:48.079387  158316 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:16:48.079971  158316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:16:48.080016  158316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:16:48.080268  158316 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 12:16:48.080290  158316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 12:16:48.080308  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:48.083089  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:48.083481  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:48.083510  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:48.083789  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHPort
	I0729 12:16:48.083975  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:48.084135  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHUsername
	I0729 12:16:48.084274  158316 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/test-preload-988528/id_rsa Username:docker}
	I0729 12:16:48.095437  158316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35973
	I0729 12:16:48.095898  158316 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:16:48.096420  158316 main.go:141] libmachine: Using API Version  1
	I0729 12:16:48.096444  158316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:16:48.096761  158316 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:16:48.097004  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetState
	I0729 12:16:48.098627  158316 main.go:141] libmachine: (test-preload-988528) Calling .DriverName
	I0729 12:16:48.098859  158316 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 12:16:48.098877  158316 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 12:16:48.098897  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHHostname
	I0729 12:16:48.101592  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:48.102009  158316 main.go:141] libmachine: (test-preload-988528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:4b:f5", ip: ""} in network mk-test-preload-988528: {Iface:virbr1 ExpiryTime:2024-07-29 13:16:17 +0000 UTC Type:0 Mac:52:54:00:d5:4b:f5 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-988528 Clientid:01:52:54:00:d5:4b:f5}
	I0729 12:16:48.102042  158316 main.go:141] libmachine: (test-preload-988528) DBG | domain test-preload-988528 has defined IP address 192.168.39.195 and MAC address 52:54:00:d5:4b:f5 in network mk-test-preload-988528
	I0729 12:16:48.102185  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHPort
	I0729 12:16:48.102370  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHKeyPath
	I0729 12:16:48.102549  158316 main.go:141] libmachine: (test-preload-988528) Calling .GetSSHUsername
	I0729 12:16:48.102691  158316 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/test-preload-988528/id_rsa Username:docker}
	I0729 12:16:48.213047  158316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:16:48.231617  158316 node_ready.go:35] waiting up to 6m0s for node "test-preload-988528" to be "Ready" ...
	I0729 12:16:48.282137  158316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 12:16:48.302499  158316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 12:16:49.222186  158316 main.go:141] libmachine: Making call to close driver server
	I0729 12:16:49.222218  158316 main.go:141] libmachine: (test-preload-988528) Calling .Close
	I0729 12:16:49.222235  158316 main.go:141] libmachine: Making call to close driver server
	I0729 12:16:49.222247  158316 main.go:141] libmachine: (test-preload-988528) Calling .Close
	I0729 12:16:49.222548  158316 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:16:49.222571  158316 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:16:49.222580  158316 main.go:141] libmachine: Making call to close driver server
	I0729 12:16:49.222581  158316 main.go:141] libmachine: (test-preload-988528) DBG | Closing plugin on server side
	I0729 12:16:49.222594  158316 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:16:49.222604  158316 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:16:49.222611  158316 main.go:141] libmachine: Making call to close driver server
	I0729 12:16:49.222610  158316 main.go:141] libmachine: (test-preload-988528) DBG | Closing plugin on server side
	I0729 12:16:49.222619  158316 main.go:141] libmachine: (test-preload-988528) Calling .Close
	I0729 12:16:49.222588  158316 main.go:141] libmachine: (test-preload-988528) Calling .Close
	I0729 12:16:49.222843  158316 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:16:49.222846  158316 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:16:49.222852  158316 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:16:49.222858  158316 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:16:49.228004  158316 main.go:141] libmachine: Making call to close driver server
	I0729 12:16:49.228025  158316 main.go:141] libmachine: (test-preload-988528) Calling .Close
	I0729 12:16:49.228269  158316 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:16:49.228286  158316 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:16:49.228292  158316 main.go:141] libmachine: (test-preload-988528) DBG | Closing plugin on server side
	I0729 12:16:49.230359  158316 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 12:16:49.231802  158316 addons.go:510] duration metric: took 1.191172362s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 12:16:50.235213  158316 node_ready.go:53] node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:52.236353  158316 node_ready.go:53] node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:54.238205  158316 node_ready.go:53] node "test-preload-988528" has status "Ready":"False"
	I0729 12:16:56.235327  158316 node_ready.go:49] node "test-preload-988528" has status "Ready":"True"
	I0729 12:16:56.235353  158316 node_ready.go:38] duration metric: took 8.003703006s for node "test-preload-988528" to be "Ready" ...
	I0729 12:16:56.235361  158316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:16:56.240383  158316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-dcrxs" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:56.244472  158316 pod_ready.go:92] pod "coredns-6d4b75cb6d-dcrxs" in "kube-system" namespace has status "Ready":"True"
	I0729 12:16:56.244492  158316 pod_ready.go:81] duration metric: took 4.08538ms for pod "coredns-6d4b75cb6d-dcrxs" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:56.244504  158316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:57.754637  158316 pod_ready.go:92] pod "etcd-test-preload-988528" in "kube-system" namespace has status "Ready":"True"
	I0729 12:16:57.754666  158316 pod_ready.go:81] duration metric: took 1.5101506s for pod "etcd-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:57.754675  158316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:57.759833  158316 pod_ready.go:92] pod "kube-apiserver-test-preload-988528" in "kube-system" namespace has status "Ready":"True"
	I0729 12:16:57.759855  158316 pod_ready.go:81] duration metric: took 5.174327ms for pod "kube-apiserver-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:57.759864  158316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:57.768450  158316 pod_ready.go:92] pod "kube-controller-manager-test-preload-988528" in "kube-system" namespace has status "Ready":"True"
	I0729 12:16:57.768471  158316 pod_ready.go:81] duration metric: took 8.600057ms for pod "kube-controller-manager-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:57.768479  158316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kkdbj" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:57.835889  158316 pod_ready.go:92] pod "kube-proxy-kkdbj" in "kube-system" namespace has status "Ready":"True"
	I0729 12:16:57.835913  158316 pod_ready.go:81] duration metric: took 67.427577ms for pod "kube-proxy-kkdbj" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:57.835922  158316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:16:59.841816  158316 pod_ready.go:102] pod "kube-scheduler-test-preload-988528" in "kube-system" namespace has status "Ready":"False"
	I0729 12:17:00.342167  158316 pod_ready.go:92] pod "kube-scheduler-test-preload-988528" in "kube-system" namespace has status "Ready":"True"
	I0729 12:17:00.342191  158316 pod_ready.go:81] duration metric: took 2.506262997s for pod "kube-scheduler-test-preload-988528" in "kube-system" namespace to be "Ready" ...
	I0729 12:17:00.342203  158316 pod_ready.go:38] duration metric: took 4.106831887s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:17:00.342217  158316 api_server.go:52] waiting for apiserver process to appear ...
	I0729 12:17:00.342266  158316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:17:00.355854  158316 api_server.go:72] duration metric: took 12.315269088s to wait for apiserver process to appear ...
	I0729 12:17:00.355877  158316 api_server.go:88] waiting for apiserver healthz status ...
	I0729 12:17:00.355897  158316 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0729 12:17:00.362678  158316 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0729 12:17:00.363782  158316 api_server.go:141] control plane version: v1.24.4
	I0729 12:17:00.363805  158316 api_server.go:131] duration metric: took 7.922109ms to wait for apiserver health ...
	I0729 12:17:00.363813  158316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 12:17:00.442136  158316 system_pods.go:59] 7 kube-system pods found
	I0729 12:17:00.442165  158316 system_pods.go:61] "coredns-6d4b75cb6d-dcrxs" [b89fb75c-d950-4cee-a4d6-5a6a9df055b9] Running
	I0729 12:17:00.442169  158316 system_pods.go:61] "etcd-test-preload-988528" [f9d896e7-76be-4c80-a494-064a843f216d] Running
	I0729 12:17:00.442173  158316 system_pods.go:61] "kube-apiserver-test-preload-988528" [c5598f7d-d16a-4ae9-9265-5d046e2dcdbe] Running
	I0729 12:17:00.442176  158316 system_pods.go:61] "kube-controller-manager-test-preload-988528" [288cab2c-2810-4b47-8c15-c79f1c1c6244] Running
	I0729 12:17:00.442179  158316 system_pods.go:61] "kube-proxy-kkdbj" [3afdbf3e-da23-4d8c-bbd8-015e6d05c77e] Running
	I0729 12:17:00.442181  158316 system_pods.go:61] "kube-scheduler-test-preload-988528" [206be2b1-a37a-47e8-85c2-8c071b596f4c] Running
	I0729 12:17:00.442187  158316 system_pods.go:61] "storage-provisioner" [5335b864-c100-4d28-b174-ad1a5ecddf2d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 12:17:00.442194  158316 system_pods.go:74] duration metric: took 78.375287ms to wait for pod list to return data ...
	I0729 12:17:00.442202  158316 default_sa.go:34] waiting for default service account to be created ...
	I0729 12:17:00.635341  158316 default_sa.go:45] found service account: "default"
	I0729 12:17:00.635373  158316 default_sa.go:55] duration metric: took 193.162629ms for default service account to be created ...
	I0729 12:17:00.635384  158316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 12:17:00.839069  158316 system_pods.go:86] 7 kube-system pods found
	I0729 12:17:00.839114  158316 system_pods.go:89] "coredns-6d4b75cb6d-dcrxs" [b89fb75c-d950-4cee-a4d6-5a6a9df055b9] Running
	I0729 12:17:00.839124  158316 system_pods.go:89] "etcd-test-preload-988528" [f9d896e7-76be-4c80-a494-064a843f216d] Running
	I0729 12:17:00.839131  158316 system_pods.go:89] "kube-apiserver-test-preload-988528" [c5598f7d-d16a-4ae9-9265-5d046e2dcdbe] Running
	I0729 12:17:00.839146  158316 system_pods.go:89] "kube-controller-manager-test-preload-988528" [288cab2c-2810-4b47-8c15-c79f1c1c6244] Running
	I0729 12:17:00.839153  158316 system_pods.go:89] "kube-proxy-kkdbj" [3afdbf3e-da23-4d8c-bbd8-015e6d05c77e] Running
	I0729 12:17:00.839159  158316 system_pods.go:89] "kube-scheduler-test-preload-988528" [206be2b1-a37a-47e8-85c2-8c071b596f4c] Running
	I0729 12:17:00.839169  158316 system_pods.go:89] "storage-provisioner" [5335b864-c100-4d28-b174-ad1a5ecddf2d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 12:17:00.839188  158316 system_pods.go:126] duration metric: took 203.79624ms to wait for k8s-apps to be running ...
	I0729 12:17:00.839207  158316 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 12:17:00.839269  158316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:17:00.872116  158316 system_svc.go:56] duration metric: took 32.877134ms WaitForService to wait for kubelet
	I0729 12:17:00.872155  158316 kubeadm.go:582] duration metric: took 12.831572667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:17:00.872184  158316 node_conditions.go:102] verifying NodePressure condition ...
	I0729 12:17:01.036552  158316 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 12:17:01.036580  158316 node_conditions.go:123] node cpu capacity is 2
	I0729 12:17:01.036590  158316 node_conditions.go:105] duration metric: took 164.400244ms to run NodePressure ...
	I0729 12:17:01.036600  158316 start.go:241] waiting for startup goroutines ...
	I0729 12:17:01.036606  158316 start.go:246] waiting for cluster config update ...
	I0729 12:17:01.036615  158316 start.go:255] writing updated cluster config ...
	I0729 12:17:01.036870  158316 ssh_runner.go:195] Run: rm -f paused
	I0729 12:17:01.083543  158316 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0729 12:17:01.085399  158316 out.go:177] 
	W0729 12:17:01.086767  158316 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0729 12:17:01.087881  158316 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0729 12:17:01.088998  158316 out.go:177] * Done! kubectl is now configured to use "test-preload-988528" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 12:17:01 test-preload-988528 crio[700]: time="2024-07-29 12:17:01.959289020Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255421959260826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cd2a56a-dd9d-4590-9019-0d5315cd015d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:17:01 test-preload-988528 crio[700]: time="2024-07-29 12:17:01.960870028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fab1fc2-b4fe-4a9b-86ad-fd2c07cd7b0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:17:01 test-preload-988528 crio[700]: time="2024-07-29 12:17:01.960936929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fab1fc2-b4fe-4a9b-86ad-fd2c07cd7b0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:17:01 test-preload-988528 crio[700]: time="2024-07-29 12:17:01.961144288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:65aa81952efac4d917e020b03af4a558d9b61cd93506d2101bbe3d0b29c97dd1,PodSandboxId:29fadaf28b07d749a439ead665eea27ffcf1dd6abb8da5edac09172a6cd5e82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722255420816965770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5335b864-c100-4d28-b174-ad1a5ecddf2d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7a7078,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7913ee05092ac673bee6102061587fead8c2d2873e022bd9e62e35bb9a5ce265,PodSandboxId:1eb72f3bfa82804c5ebac1327de390a88417b8660e850a912769dbea82722d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722255414106070515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dcrxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89fb75c-d950-4cee-a4d6-5a6a9df055b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c67e32f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d28e3ebff9225eae4923c6e90b89c962cd49030af20474ec88d64830781524c,PodSandboxId:e11deac0f0060db2d3467172fe50f46f6086205cc9a7e85534779ed3cc206139,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722255407080133458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a
fdbf3e-da23-4d8c-bbd8-015e6d05c77e,},Annotations:map[string]string{io.kubernetes.container.hash: 2b6202e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acef723503d76523023e79fc54db081c9b3820a8b03a3d8230736f753c56347b,PodSandboxId:29fadaf28b07d749a439ead665eea27ffcf1dd6abb8da5edac09172a6cd5e82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255406841782148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5335b864-c100-4
d28-b174-ad1a5ecddf2d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7a7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8707d72b28b35df1615f8f4df5e9e2d5d5d8806db2150007d6349c7743b1f67,PodSandboxId:5116452073e86b2e70c5428521f2d10db495ead7487944df8a41f1e2f5b2d0da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722255401460459547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6759b88cde7c0120e3db3194c067d3ff,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 39a1dff9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0d0105a380e56e7f7ef27e3cbca69e89d18e2824a973c44c6ff8df98ac8f08,PodSandboxId:e2357d682382c1fe5a34363acb0659dbcf48fceadffa5ad9bfc028605e9f6f97,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722255401439539709,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb299f2b76a68edaaf1e441ff5cbc4f,},A
nnotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208730a67373d8ced0df0e6db60507bc851fe38d6c077173ed9d62d3ca5ff991,PodSandboxId:83350e26865b994b60a8f667230f9474bebb0f5a864a88eb84537c560e6848ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722255401412022766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a532fdfc8d1b51b55d38a42fa2191e3,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cb5469d5cdbf21cb5d13d4f5e4508efa7b921484c8376e95bd0c1c03686096,PodSandboxId:83aebfca884a33a1e1b615dc09cd2429d6ce3bbe3b781223adfe07895edbc602,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722255401394694596,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 394c3b14e33edf6321bbbf54f9f3a94e,},Annotations:map[string]
string{io.kubernetes.container.hash: cdddbf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fab1fc2-b4fe-4a9b-86ad-fd2c07cd7b0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:17:01 test-preload-988528 crio[700]: time="2024-07-29 12:17:01.998054458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6e08f90-8207-49c4-97d3-960cb031e3bb name=/runtime.v1.RuntimeService/Version
	Jul 29 12:17:01 test-preload-988528 crio[700]: time="2024-07-29 12:17:01.998143885Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6e08f90-8207-49c4-97d3-960cb031e3bb name=/runtime.v1.RuntimeService/Version
	Jul 29 12:17:01 test-preload-988528 crio[700]: time="2024-07-29 12:17:01.999568352Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24054a0e-3ba4-4c21-b3be-8afa9b67bb2c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.000016519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255421999988223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24054a0e-3ba4-4c21-b3be-8afa9b67bb2c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.000564492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4aee3e49-8cdb-49bd-92e1-e1f719465200 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.000629899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4aee3e49-8cdb-49bd-92e1-e1f719465200 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.000811006Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:65aa81952efac4d917e020b03af4a558d9b61cd93506d2101bbe3d0b29c97dd1,PodSandboxId:29fadaf28b07d749a439ead665eea27ffcf1dd6abb8da5edac09172a6cd5e82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722255420816965770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5335b864-c100-4d28-b174-ad1a5ecddf2d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7a7078,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7913ee05092ac673bee6102061587fead8c2d2873e022bd9e62e35bb9a5ce265,PodSandboxId:1eb72f3bfa82804c5ebac1327de390a88417b8660e850a912769dbea82722d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722255414106070515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dcrxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89fb75c-d950-4cee-a4d6-5a6a9df055b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c67e32f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d28e3ebff9225eae4923c6e90b89c962cd49030af20474ec88d64830781524c,PodSandboxId:e11deac0f0060db2d3467172fe50f46f6086205cc9a7e85534779ed3cc206139,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722255407080133458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a
fdbf3e-da23-4d8c-bbd8-015e6d05c77e,},Annotations:map[string]string{io.kubernetes.container.hash: 2b6202e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acef723503d76523023e79fc54db081c9b3820a8b03a3d8230736f753c56347b,PodSandboxId:29fadaf28b07d749a439ead665eea27ffcf1dd6abb8da5edac09172a6cd5e82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255406841782148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5335b864-c100-4
d28-b174-ad1a5ecddf2d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7a7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8707d72b28b35df1615f8f4df5e9e2d5d5d8806db2150007d6349c7743b1f67,PodSandboxId:5116452073e86b2e70c5428521f2d10db495ead7487944df8a41f1e2f5b2d0da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722255401460459547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6759b88cde7c0120e3db3194c067d3ff,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 39a1dff9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0d0105a380e56e7f7ef27e3cbca69e89d18e2824a973c44c6ff8df98ac8f08,PodSandboxId:e2357d682382c1fe5a34363acb0659dbcf48fceadffa5ad9bfc028605e9f6f97,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722255401439539709,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb299f2b76a68edaaf1e441ff5cbc4f,},A
nnotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208730a67373d8ced0df0e6db60507bc851fe38d6c077173ed9d62d3ca5ff991,PodSandboxId:83350e26865b994b60a8f667230f9474bebb0f5a864a88eb84537c560e6848ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722255401412022766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a532fdfc8d1b51b55d38a42fa2191e3,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cb5469d5cdbf21cb5d13d4f5e4508efa7b921484c8376e95bd0c1c03686096,PodSandboxId:83aebfca884a33a1e1b615dc09cd2429d6ce3bbe3b781223adfe07895edbc602,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722255401394694596,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 394c3b14e33edf6321bbbf54f9f3a94e,},Annotations:map[string]
string{io.kubernetes.container.hash: cdddbf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4aee3e49-8cdb-49bd-92e1-e1f719465200 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.036191155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f830bfb1-019f-4138-aeb3-b93a24fd27ff name=/runtime.v1.RuntimeService/Version
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.036290973Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f830bfb1-019f-4138-aeb3-b93a24fd27ff name=/runtime.v1.RuntimeService/Version
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.037849333Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbf56cd3-ca5a-49a6-a497-cb0f0cc54044 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.038295348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255422038272970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbf56cd3-ca5a-49a6-a497-cb0f0cc54044 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.038908770Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6db062be-4a76-4a25-bcdf-8ae2d51c0e52 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.038963212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6db062be-4a76-4a25-bcdf-8ae2d51c0e52 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.039154541Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:65aa81952efac4d917e020b03af4a558d9b61cd93506d2101bbe3d0b29c97dd1,PodSandboxId:29fadaf28b07d749a439ead665eea27ffcf1dd6abb8da5edac09172a6cd5e82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722255420816965770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5335b864-c100-4d28-b174-ad1a5ecddf2d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7a7078,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7913ee05092ac673bee6102061587fead8c2d2873e022bd9e62e35bb9a5ce265,PodSandboxId:1eb72f3bfa82804c5ebac1327de390a88417b8660e850a912769dbea82722d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722255414106070515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dcrxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89fb75c-d950-4cee-a4d6-5a6a9df055b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c67e32f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d28e3ebff9225eae4923c6e90b89c962cd49030af20474ec88d64830781524c,PodSandboxId:e11deac0f0060db2d3467172fe50f46f6086205cc9a7e85534779ed3cc206139,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722255407080133458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a
fdbf3e-da23-4d8c-bbd8-015e6d05c77e,},Annotations:map[string]string{io.kubernetes.container.hash: 2b6202e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acef723503d76523023e79fc54db081c9b3820a8b03a3d8230736f753c56347b,PodSandboxId:29fadaf28b07d749a439ead665eea27ffcf1dd6abb8da5edac09172a6cd5e82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255406841782148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5335b864-c100-4
d28-b174-ad1a5ecddf2d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7a7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8707d72b28b35df1615f8f4df5e9e2d5d5d8806db2150007d6349c7743b1f67,PodSandboxId:5116452073e86b2e70c5428521f2d10db495ead7487944df8a41f1e2f5b2d0da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722255401460459547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6759b88cde7c0120e3db3194c067d3ff,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 39a1dff9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0d0105a380e56e7f7ef27e3cbca69e89d18e2824a973c44c6ff8df98ac8f08,PodSandboxId:e2357d682382c1fe5a34363acb0659dbcf48fceadffa5ad9bfc028605e9f6f97,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722255401439539709,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb299f2b76a68edaaf1e441ff5cbc4f,},A
nnotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208730a67373d8ced0df0e6db60507bc851fe38d6c077173ed9d62d3ca5ff991,PodSandboxId:83350e26865b994b60a8f667230f9474bebb0f5a864a88eb84537c560e6848ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722255401412022766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a532fdfc8d1b51b55d38a42fa2191e3,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cb5469d5cdbf21cb5d13d4f5e4508efa7b921484c8376e95bd0c1c03686096,PodSandboxId:83aebfca884a33a1e1b615dc09cd2429d6ce3bbe3b781223adfe07895edbc602,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722255401394694596,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 394c3b14e33edf6321bbbf54f9f3a94e,},Annotations:map[string]
string{io.kubernetes.container.hash: cdddbf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6db062be-4a76-4a25-bcdf-8ae2d51c0e52 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.069663785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed924db5-d8d5-40ae-97ea-0f17fdd3aeb1 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.069751538Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed924db5-d8d5-40ae-97ea-0f17fdd3aeb1 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.071271848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf89a98a-67ec-4419-8864-f749dded830c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.071785406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255422071763804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf89a98a-67ec-4419-8864-f749dded830c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.072252670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccfddac1-cf79-4c12-ab6a-8abaf3200bb8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.072374636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccfddac1-cf79-4c12-ab6a-8abaf3200bb8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:17:02 test-preload-988528 crio[700]: time="2024-07-29 12:17:02.072568088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:65aa81952efac4d917e020b03af4a558d9b61cd93506d2101bbe3d0b29c97dd1,PodSandboxId:29fadaf28b07d749a439ead665eea27ffcf1dd6abb8da5edac09172a6cd5e82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722255420816965770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5335b864-c100-4d28-b174-ad1a5ecddf2d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7a7078,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7913ee05092ac673bee6102061587fead8c2d2873e022bd9e62e35bb9a5ce265,PodSandboxId:1eb72f3bfa82804c5ebac1327de390a88417b8660e850a912769dbea82722d4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722255414106070515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dcrxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b89fb75c-d950-4cee-a4d6-5a6a9df055b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c67e32f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d28e3ebff9225eae4923c6e90b89c962cd49030af20474ec88d64830781524c,PodSandboxId:e11deac0f0060db2d3467172fe50f46f6086205cc9a7e85534779ed3cc206139,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722255407080133458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kkdbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a
fdbf3e-da23-4d8c-bbd8-015e6d05c77e,},Annotations:map[string]string{io.kubernetes.container.hash: 2b6202e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acef723503d76523023e79fc54db081c9b3820a8b03a3d8230736f753c56347b,PodSandboxId:29fadaf28b07d749a439ead665eea27ffcf1dd6abb8da5edac09172a6cd5e82e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255406841782148,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5335b864-c100-4
d28-b174-ad1a5ecddf2d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7a7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8707d72b28b35df1615f8f4df5e9e2d5d5d8806db2150007d6349c7743b1f67,PodSandboxId:5116452073e86b2e70c5428521f2d10db495ead7487944df8a41f1e2f5b2d0da,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722255401460459547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6759b88cde7c0120e3db3194c067d3ff,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 39a1dff9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0d0105a380e56e7f7ef27e3cbca69e89d18e2824a973c44c6ff8df98ac8f08,PodSandboxId:e2357d682382c1fe5a34363acb0659dbcf48fceadffa5ad9bfc028605e9f6f97,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722255401439539709,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdb299f2b76a68edaaf1e441ff5cbc4f,},A
nnotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208730a67373d8ced0df0e6db60507bc851fe38d6c077173ed9d62d3ca5ff991,PodSandboxId:83350e26865b994b60a8f667230f9474bebb0f5a864a88eb84537c560e6848ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722255401412022766,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a532fdfc8d1b51b55d38a42fa2191e3,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cb5469d5cdbf21cb5d13d4f5e4508efa7b921484c8376e95bd0c1c03686096,PodSandboxId:83aebfca884a33a1e1b615dc09cd2429d6ce3bbe3b781223adfe07895edbc602,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722255401394694596,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-988528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 394c3b14e33edf6321bbbf54f9f3a94e,},Annotations:map[string]
string{io.kubernetes.container.hash: cdddbf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ccfddac1-cf79-4c12-ab6a-8abaf3200bb8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	65aa81952efac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   1 second ago        Running             storage-provisioner       3                   29fadaf28b07d       storage-provisioner
	7913ee05092ac       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   1eb72f3bfa828       coredns-6d4b75cb6d-dcrxs
	5d28e3ebff922       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   e11deac0f0060       kube-proxy-kkdbj
	acef723503d76       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       2                   29fadaf28b07d       storage-provisioner
	a8707d72b28b3       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   5116452073e86       etcd-test-preload-988528
	9b0d0105a380e       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   e2357d682382c       kube-controller-manager-test-preload-988528
	208730a67373d       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   83350e26865b9       kube-scheduler-test-preload-988528
	35cb5469d5cdb       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   83aebfca884a3       kube-apiserver-test-preload-988528
	
	
	==> coredns [7913ee05092ac673bee6102061587fead8c2d2873e022bd9e62e35bb9a5ce265] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:48660 - 22733 "HINFO IN 2503016321512156957.1030628350799739381. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019091129s
	
	
	==> describe nodes <==
	Name:               test-preload-988528
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-988528
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=test-preload-988528
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_15_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:15:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-988528
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:16:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:16:55 +0000   Mon, 29 Jul 2024 12:15:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:16:55 +0000   Mon, 29 Jul 2024 12:15:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:16:55 +0000   Mon, 29 Jul 2024 12:15:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:16:55 +0000   Mon, 29 Jul 2024 12:16:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    test-preload-988528
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 043dcbf8ac1948ebb1755be28e1fd12f
	  System UUID:                043dcbf8-ac19-48eb-b175-5be28e1fd12f
	  Boot ID:                    13b57851-e5d4-42a1-b588-891e48046c43
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dcrxs                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     73s
	  kube-system                 etcd-test-preload-988528                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         85s
	  kube-system                 kube-apiserver-test-preload-988528             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-test-preload-988528    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-kkdbj                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-test-preload-988528             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 71s                kube-proxy       
	  Normal  NodeHasSufficientMemory  94s (x5 over 94s)  kubelet          Node test-preload-988528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s (x5 over 94s)  kubelet          Node test-preload-988528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s (x5 over 94s)  kubelet          Node test-preload-988528 status is now: NodeHasSufficientPID
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  86s                kubelet          Node test-preload-988528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s                kubelet          Node test-preload-988528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s                kubelet          Node test-preload-988528 status is now: NodeHasSufficientPID
	  Normal  NodeReady                75s                kubelet          Node test-preload-988528 status is now: NodeReady
	  Normal  RegisteredNode           74s                node-controller  Node test-preload-988528 event: Registered Node test-preload-988528 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-988528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-988528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-988528 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-988528 event: Registered Node test-preload-988528 in Controller
	
	
	==> dmesg <==
	[Jul29 12:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047316] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035590] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.647869] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.624461] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.376038] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.326368] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.057662] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050004] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.185319] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.116845] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.256497] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[ +13.104177] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.054533] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.736397] systemd-fstab-generator[1084]: Ignoring "noauto" option for root device
	[  +6.151687] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.446862] systemd-fstab-generator[1772]: Ignoring "noauto" option for root device
	[  +5.810101] kauditd_printk_skb: 65 callbacks suppressed
	[Jul29 12:17] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [a8707d72b28b35df1615f8f4df5e9e2d5d5d8806db2150007d6349c7743b1f67] <==
	{"level":"info","ts":"2024-07-29T12:16:41.706Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"324857e3fe6e5c62","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T12:16:41.709Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T12:16:41.709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 switched to configuration voters=(3623242536957402210)"}
	{"level":"info","ts":"2024-07-29T12:16:41.710Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e260bcd32c6c8b35","local-member-id":"324857e3fe6e5c62","added-peer-id":"324857e3fe6e5c62","added-peer-peer-urls":["https://192.168.39.195:2380"]}
	{"level":"info","ts":"2024-07-29T12:16:41.710Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e260bcd32c6c8b35","local-member-id":"324857e3fe6e5c62","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:16:41.710Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:16:41.716Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T12:16:41.716Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-07-29T12:16:41.716Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.195:2380"}
	{"level":"info","ts":"2024-07-29T12:16:41.716Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"324857e3fe6e5c62","initial-advertise-peer-urls":["https://192.168.39.195:2380"],"listen-peer-urls":["https://192.168.39.195:2380"],"advertise-client-urls":["https://192.168.39.195:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.195:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T12:16:41.717Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T12:16:43.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T12:16:43.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T12:16:43.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 received MsgPreVoteResp from 324857e3fe6e5c62 at term 2"}
	{"level":"info","ts":"2024-07-29T12:16:43.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T12:16:43.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 received MsgVoteResp from 324857e3fe6e5c62 at term 3"}
	{"level":"info","ts":"2024-07-29T12:16:43.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"324857e3fe6e5c62 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T12:16:43.092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 324857e3fe6e5c62 elected leader 324857e3fe6e5c62 at term 3"}
	{"level":"info","ts":"2024-07-29T12:16:43.097Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"324857e3fe6e5c62","local-member-attributes":"{Name:test-preload-988528 ClientURLs:[https://192.168.39.195:2379]}","request-path":"/0/members/324857e3fe6e5c62/attributes","cluster-id":"e260bcd32c6c8b35","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T12:16:43.097Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:16:43.097Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T12:16:43.097Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T12:16:43.098Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:16:43.099Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T12:16:43.099Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.195:2379"}
	
	
	==> kernel <==
	 12:17:02 up 0 min,  0 users,  load average: 0.55, 0.17, 0.06
	Linux test-preload-988528 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [35cb5469d5cdbf21cb5d13d4f5e4508efa7b921484c8376e95bd0c1c03686096] <==
	I0729 12:16:45.371293       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 12:16:45.371352       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 12:16:45.371949       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0729 12:16:45.379010       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0729 12:16:45.419711       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 12:16:45.438719       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 12:16:45.484443       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0729 12:16:45.493210       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0729 12:16:45.517454       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 12:16:45.563384       1 cache.go:39] Caches are synced for autoregister controller
	I0729 12:16:45.564172       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 12:16:45.564830       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 12:16:45.567538       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 12:16:45.567577       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 12:16:45.571136       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0729 12:16:46.070504       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 12:16:46.370648       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 12:16:46.976626       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 12:16:46.991935       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 12:16:47.030567       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 12:16:47.052664       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 12:16:47.061554       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 12:16:47.310562       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0729 12:16:57.892429       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 12:16:58.042554       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [9b0d0105a380e56e7f7ef27e3cbca69e89d18e2824a973c44c6ff8df98ac8f08] <==
	I0729 12:16:57.781990       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0729 12:16:57.783201       1 shared_informer.go:262] Caches are synced for PV protection
	I0729 12:16:57.785552       1 shared_informer.go:262] Caches are synced for expand
	I0729 12:16:57.791963       1 shared_informer.go:262] Caches are synced for HPA
	I0729 12:16:57.793412       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0729 12:16:57.794776       1 shared_informer.go:262] Caches are synced for stateful set
	I0729 12:16:57.795943       1 shared_informer.go:262] Caches are synced for disruption
	I0729 12:16:57.795968       1 disruption.go:371] Sending events to api server.
	I0729 12:16:57.797207       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 12:16:57.797259       1 shared_informer.go:262] Caches are synced for job
	I0729 12:16:57.799721       1 shared_informer.go:262] Caches are synced for endpoint
	I0729 12:16:57.799739       1 shared_informer.go:262] Caches are synced for deployment
	I0729 12:16:57.801407       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0729 12:16:57.805265       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0729 12:16:57.808217       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0729 12:16:57.828499       1 shared_informer.go:262] Caches are synced for ephemeral
	I0729 12:16:57.831816       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0729 12:16:57.903758       1 shared_informer.go:262] Caches are synced for persistent volume
	I0729 12:16:57.924066       1 shared_informer.go:262] Caches are synced for namespace
	I0729 12:16:57.959224       1 shared_informer.go:262] Caches are synced for service account
	I0729 12:16:57.984291       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 12:16:58.020849       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 12:16:58.420254       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 12:16:58.420402       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0729 12:16:58.477465       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [5d28e3ebff9225eae4923c6e90b89c962cd49030af20474ec88d64830781524c] <==
	I0729 12:16:47.266425       1 node.go:163] Successfully retrieved node IP: 192.168.39.195
	I0729 12:16:47.266483       1 server_others.go:138] "Detected node IP" address="192.168.39.195"
	I0729 12:16:47.266528       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 12:16:47.303438       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 12:16:47.303463       1 server_others.go:206] "Using iptables Proxier"
	I0729 12:16:47.303869       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 12:16:47.304163       1 server.go:661] "Version info" version="v1.24.4"
	I0729 12:16:47.304187       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:16:47.305481       1 config.go:317] "Starting service config controller"
	I0729 12:16:47.305520       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 12:16:47.305567       1 config.go:226] "Starting endpoint slice config controller"
	I0729 12:16:47.305583       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 12:16:47.307449       1 config.go:444] "Starting node config controller"
	I0729 12:16:47.307492       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 12:16:47.405704       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0729 12:16:47.405761       1 shared_informer.go:262] Caches are synced for service config
	I0729 12:16:47.407583       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [208730a67373d8ced0df0e6db60507bc851fe38d6c077173ed9d62d3ca5ff991] <==
	I0729 12:16:42.241862       1 serving.go:348] Generated self-signed cert in-memory
	W0729 12:16:45.455294       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 12:16:45.456111       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:16:45.456214       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 12:16:45.456242       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 12:16:45.493064       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0729 12:16:45.493916       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:16:45.497436       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0729 12:16:45.506111       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 12:16:45.506168       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 12:16:45.506233       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 12:16:45.606521       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: I0729 12:16:46.179261    1091 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c64df13-fd9f-4dff-913e-e22590c38bfd-config-volume\") pod \"8c64df13-fd9f-4dff-913e-e22590c38bfd\" (UID: \"8c64df13-fd9f-4dff-913e-e22590c38bfd\") "
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: E0729 12:16:46.180270    1091 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: E0729 12:16:46.180379    1091 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b89fb75c-d950-4cee-a4d6-5a6a9df055b9-config-volume podName:b89fb75c-d950-4cee-a4d6-5a6a9df055b9 nodeName:}" failed. No retries permitted until 2024-07-29 12:16:46.680351802 +0000 UTC m=+6.091724237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b89fb75c-d950-4cee-a4d6-5a6a9df055b9-config-volume") pod "coredns-6d4b75cb6d-dcrxs" (UID: "b89fb75c-d950-4cee-a4d6-5a6a9df055b9") : object "kube-system"/"coredns" not registered
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: W0729 12:16:46.181382    1091 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/8c64df13-fd9f-4dff-913e-e22590c38bfd/volumes/kubernetes.io~projected/kube-api-access-2gnpx: clearQuota called, but quotas disabled
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: W0729 12:16:46.181389    1091 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/8c64df13-fd9f-4dff-913e-e22590c38bfd/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: I0729 12:16:46.181675    1091 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c64df13-fd9f-4dff-913e-e22590c38bfd-kube-api-access-2gnpx" (OuterVolumeSpecName: "kube-api-access-2gnpx") pod "8c64df13-fd9f-4dff-913e-e22590c38bfd" (UID: "8c64df13-fd9f-4dff-913e-e22590c38bfd"). InnerVolumeSpecName "kube-api-access-2gnpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: I0729 12:16:46.181957    1091 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c64df13-fd9f-4dff-913e-e22590c38bfd-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c64df13-fd9f-4dff-913e-e22590c38bfd" (UID: "8c64df13-fd9f-4dff-913e-e22590c38bfd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: I0729 12:16:46.280610    1091 reconciler.go:384] "Volume detached for volume \"kube-api-access-2gnpx\" (UniqueName: \"kubernetes.io/projected/8c64df13-fd9f-4dff-913e-e22590c38bfd-kube-api-access-2gnpx\") on node \"test-preload-988528\" DevicePath \"\""
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: I0729 12:16:46.280744    1091 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c64df13-fd9f-4dff-913e-e22590c38bfd-config-volume\") on node \"test-preload-988528\" DevicePath \"\""
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: E0729 12:16:46.684901    1091 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: E0729 12:16:46.684976    1091 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b89fb75c-d950-4cee-a4d6-5a6a9df055b9-config-volume podName:b89fb75c-d950-4cee-a4d6-5a6a9df055b9 nodeName:}" failed. No retries permitted until 2024-07-29 12:16:47.684958893 +0000 UTC m=+7.096331328 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b89fb75c-d950-4cee-a4d6-5a6a9df055b9-config-volume") pod "coredns-6d4b75cb6d-dcrxs" (UID: "b89fb75c-d950-4cee-a4d6-5a6a9df055b9") : object "kube-system"/"coredns" not registered
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: I0729 12:16:46.808728    1091 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8c64df13-fd9f-4dff-913e-e22590c38bfd path="/var/lib/kubelet/pods/8c64df13-fd9f-4dff-913e-e22590c38bfd/volumes"
	Jul 29 12:16:46 test-preload-988528 kubelet[1091]: I0729 12:16:46.834980    1091 scope.go:110] "RemoveContainer" containerID="c6fd76d3feca17204fe0e5a01f5f5de53a614bde57014265767d3f8be3e1362a"
	Jul 29 12:16:47 test-preload-988528 kubelet[1091]: E0729 12:16:47.692184    1091 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 12:16:47 test-preload-988528 kubelet[1091]: E0729 12:16:47.692255    1091 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b89fb75c-d950-4cee-a4d6-5a6a9df055b9-config-volume podName:b89fb75c-d950-4cee-a4d6-5a6a9df055b9 nodeName:}" failed. No retries permitted until 2024-07-29 12:16:49.692241057 +0000 UTC m=+9.103613490 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b89fb75c-d950-4cee-a4d6-5a6a9df055b9-config-volume") pod "coredns-6d4b75cb6d-dcrxs" (UID: "b89fb75c-d950-4cee-a4d6-5a6a9df055b9") : object "kube-system"/"coredns" not registered
	Jul 29 12:16:47 test-preload-988528 kubelet[1091]: E0729 12:16:47.800906    1091 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-dcrxs" podUID=b89fb75c-d950-4cee-a4d6-5a6a9df055b9
	Jul 29 12:16:47 test-preload-988528 kubelet[1091]: I0729 12:16:47.839665    1091 scope.go:110] "RemoveContainer" containerID="c6fd76d3feca17204fe0e5a01f5f5de53a614bde57014265767d3f8be3e1362a"
	Jul 29 12:16:47 test-preload-988528 kubelet[1091]: I0729 12:16:47.839905    1091 scope.go:110] "RemoveContainer" containerID="acef723503d76523023e79fc54db081c9b3820a8b03a3d8230736f753c56347b"
	Jul 29 12:16:47 test-preload-988528 kubelet[1091]: E0729 12:16:47.840037    1091 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5335b864-c100-4d28-b174-ad1a5ecddf2d)\"" pod="kube-system/storage-provisioner" podUID=5335b864-c100-4d28-b174-ad1a5ecddf2d
	Jul 29 12:16:48 test-preload-988528 kubelet[1091]: I0729 12:16:48.849979    1091 scope.go:110] "RemoveContainer" containerID="acef723503d76523023e79fc54db081c9b3820a8b03a3d8230736f753c56347b"
	Jul 29 12:16:48 test-preload-988528 kubelet[1091]: E0729 12:16:48.850108    1091 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5335b864-c100-4d28-b174-ad1a5ecddf2d)\"" pod="kube-system/storage-provisioner" podUID=5335b864-c100-4d28-b174-ad1a5ecddf2d
	Jul 29 12:16:49 test-preload-988528 kubelet[1091]: E0729 12:16:49.709120    1091 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 12:16:49 test-preload-988528 kubelet[1091]: E0729 12:16:49.709366    1091 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b89fb75c-d950-4cee-a4d6-5a6a9df055b9-config-volume podName:b89fb75c-d950-4cee-a4d6-5a6a9df055b9 nodeName:}" failed. No retries permitted until 2024-07-29 12:16:53.709343489 +0000 UTC m=+13.120715925 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b89fb75c-d950-4cee-a4d6-5a6a9df055b9-config-volume") pod "coredns-6d4b75cb6d-dcrxs" (UID: "b89fb75c-d950-4cee-a4d6-5a6a9df055b9") : object "kube-system"/"coredns" not registered
	Jul 29 12:16:49 test-preload-988528 kubelet[1091]: E0729 12:16:49.801470    1091 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-dcrxs" podUID=b89fb75c-d950-4cee-a4d6-5a6a9df055b9
	Jul 29 12:17:00 test-preload-988528 kubelet[1091]: I0729 12:17:00.801664    1091 scope.go:110] "RemoveContainer" containerID="acef723503d76523023e79fc54db081c9b3820a8b03a3d8230736f753c56347b"
	
	
	==> storage-provisioner [65aa81952efac4d917e020b03af4a558d9b61cd93506d2101bbe3d0b29c97dd1] <==
	I0729 12:17:00.909960       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 12:17:00.926618       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 12:17:00.927100       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [acef723503d76523023e79fc54db081c9b3820a8b03a3d8230736f753c56347b] <==
	I0729 12:16:46.931187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 12:16:46.937958       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-988528 -n test-preload-988528
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-988528 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-988528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-988528
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-988528: (1.136597615s)
--- FAIL: TestPreload (167.33s)

                                                
                                    
x
+
TestKubernetesUpgrade (379.08s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-714444 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-714444 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m54.694733508s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-714444] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19336
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-714444" primary control-plane node in "kubernetes-upgrade-714444" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:18:54.772442  160242 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:18:54.772580  160242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:18:54.772592  160242 out.go:304] Setting ErrFile to fd 2...
	I0729 12:18:54.772598  160242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:18:54.772838  160242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 12:18:54.774296  160242 out.go:298] Setting JSON to false
	I0729 12:18:54.775297  160242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7286,"bootTime":1722248249,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:18:54.775391  160242 start.go:139] virtualization: kvm guest
	I0729 12:18:54.777295  160242 out.go:177] * [kubernetes-upgrade-714444] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:18:54.779539  160242 notify.go:220] Checking for updates...
	I0729 12:18:54.780811  160242 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 12:18:54.783996  160242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:18:54.786982  160242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:18:54.789155  160242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:18:54.791823  160242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:18:54.794426  160242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:18:54.795956  160242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:18:54.838602  160242 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 12:18:54.839829  160242 start.go:297] selected driver: kvm2
	I0729 12:18:54.839842  160242 start.go:901] validating driver "kvm2" against <nil>
	I0729 12:18:54.839852  160242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:18:54.840828  160242 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:18:54.858006  160242 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:18:54.876552  160242 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:18:54.876620  160242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:18:54.876894  160242 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 12:18:54.876944  160242 cni.go:84] Creating CNI manager for ""
	I0729 12:18:54.876953  160242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:18:54.876983  160242 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 12:18:54.877064  160242 start.go:340] cluster config:
	{Name:kubernetes-upgrade-714444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-714444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:18:54.877186  160242 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:18:54.878741  160242 out.go:177] * Starting "kubernetes-upgrade-714444" primary control-plane node in "kubernetes-upgrade-714444" cluster
	I0729 12:18:54.880016  160242 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 12:18:54.880057  160242 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 12:18:54.880067  160242 cache.go:56] Caching tarball of preloaded images
	I0729 12:18:54.880155  160242 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:18:54.880171  160242 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 12:18:54.880567  160242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/config.json ...
	I0729 12:18:54.880602  160242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/config.json: {Name:mkd9fa40596e377fc43fe9a6130fa5323796734e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:18:54.880775  160242 start.go:360] acquireMachinesLock for kubernetes-upgrade-714444: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:19:18.793562  160242 start.go:364] duration metric: took 23.912757957s to acquireMachinesLock for "kubernetes-upgrade-714444"
	I0729 12:19:18.793634  160242 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-714444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-714444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:19:18.793742  160242 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 12:19:18.795843  160242 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 12:19:18.796053  160242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:19:18.796113  160242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:19:18.812759  160242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I0729 12:19:18.813280  160242 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:19:18.813826  160242 main.go:141] libmachine: Using API Version  1
	I0729 12:19:18.813851  160242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:19:18.814169  160242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:19:18.814338  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetMachineName
	I0729 12:19:18.814543  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:19:18.814726  160242 start.go:159] libmachine.API.Create for "kubernetes-upgrade-714444" (driver="kvm2")
	I0729 12:19:18.814760  160242 client.go:168] LocalClient.Create starting
	I0729 12:19:18.814798  160242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem
	I0729 12:19:18.814837  160242 main.go:141] libmachine: Decoding PEM data...
	I0729 12:19:18.814871  160242 main.go:141] libmachine: Parsing certificate...
	I0729 12:19:18.814938  160242 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem
	I0729 12:19:18.814969  160242 main.go:141] libmachine: Decoding PEM data...
	I0729 12:19:18.814986  160242 main.go:141] libmachine: Parsing certificate...
	I0729 12:19:18.815008  160242 main.go:141] libmachine: Running pre-create checks...
	I0729 12:19:18.815022  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .PreCreateCheck
	I0729 12:19:18.815533  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetConfigRaw
	I0729 12:19:18.816002  160242 main.go:141] libmachine: Creating machine...
	I0729 12:19:18.816014  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .Create
	I0729 12:19:18.816170  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Creating KVM machine...
	I0729 12:19:18.817496  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found existing default KVM network
	I0729 12:19:18.818500  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:18.818311  160596 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b3:04:0f} reservation:<nil>}
	I0729 12:19:18.819260  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:18.819173  160596 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002130a0}
	I0729 12:19:18.819297  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | created network xml: 
	I0729 12:19:18.819310  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | <network>
	I0729 12:19:18.819324  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG |   <name>mk-kubernetes-upgrade-714444</name>
	I0729 12:19:18.819335  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG |   <dns enable='no'/>
	I0729 12:19:18.819342  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG |   
	I0729 12:19:18.819354  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0729 12:19:18.819374  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG |     <dhcp>
	I0729 12:19:18.819387  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0729 12:19:18.819405  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG |     </dhcp>
	I0729 12:19:18.819416  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG |   </ip>
	I0729 12:19:18.819425  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG |   
	I0729 12:19:18.819434  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | </network>
	I0729 12:19:18.819445  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | 
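
	The network XML above is rendered from the chosen free subnet before being handed to libvirt. A minimal sketch of how such a definition can be produced in Go with text/template; the struct and field names here are illustrative, not minikube's real config types:

	    package main

	    import (
	    	"os"
	    	"text/template"
	    )

	    // netParams holds the illustrative values substituted into the network XML.
	    type netParams struct {
	    	Name     string
	    	Gateway  string
	    	Netmask  string
	    	DHCPLow  string
	    	DHCPHigh string
	    }

	    const netXML = `<network>
	      <name>{{.Name}}</name>
	      <dns enable='no'/>
	      <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	        <dhcp>
	          <range start='{{.DHCPLow}}' end='{{.DHCPHigh}}'/>
	        </dhcp>
	      </ip>
	    </network>
	    `

	    func main() {
	    	p := netParams{
	    		Name:     "mk-kubernetes-upgrade-714444",
	    		Gateway:  "192.168.50.1",
	    		Netmask:  "255.255.255.0",
	    		DHCPLow:  "192.168.50.2",
	    		DHCPHigh: "192.168.50.253",
	    	}
	    	// Render the XML to stdout; a real caller would pass it to libvirt.
	    	tmpl := template.Must(template.New("net").Parse(netXML))
	    	if err := tmpl.Execute(os.Stdout, p); err != nil {
	    		panic(err)
	    	}
	    }
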
	I0729 12:19:18.825188  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | trying to create private KVM network mk-kubernetes-upgrade-714444 192.168.50.0/24...
	I0729 12:19:18.897574  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Setting up store path in /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444 ...
	I0729 12:19:18.897615  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Building disk image from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 12:19:18.897627  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | private KVM network mk-kubernetes-upgrade-714444 192.168.50.0/24 created
	I0729 12:19:18.897647  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:18.897496  160596 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:19:18.897666  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Downloading /home/jenkins/minikube-integration/19336-113730/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 12:19:19.142503  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:19.142350  160596 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa...
	I0729 12:19:19.533735  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:19.533575  160596 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/kubernetes-upgrade-714444.rawdisk...
	I0729 12:19:19.533766  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Writing magic tar header
	I0729 12:19:19.533785  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Writing SSH key tar header
	I0729 12:19:19.533804  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:19.533707  160596 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444 ...
	I0729 12:19:19.533821  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444
	I0729 12:19:19.533926  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444 (perms=drwx------)
	I0729 12:19:19.533955  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines
	I0729 12:19:19.533967  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines (perms=drwxr-xr-x)
	I0729 12:19:19.534010  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:19:19.534037  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730
	I0729 12:19:19.534053  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube (perms=drwxr-xr-x)
	I0729 12:19:19.534071  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730 (perms=drwxrwxr-x)
	I0729 12:19:19.534086  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 12:19:19.534097  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 12:19:19.534109  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Creating domain...
	I0729 12:19:19.534129  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 12:19:19.534141  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Checking permissions on dir: /home/jenkins
	I0729 12:19:19.534148  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Checking permissions on dir: /home
	I0729 12:19:19.534154  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Skipping /home - not owner
	I0729 12:19:19.535330  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) define libvirt domain using xml: 
	I0729 12:19:19.535360  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) <domain type='kvm'>
	I0729 12:19:19.535372  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   <name>kubernetes-upgrade-714444</name>
	I0729 12:19:19.535388  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   <memory unit='MiB'>2200</memory>
	I0729 12:19:19.535401  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   <vcpu>2</vcpu>
	I0729 12:19:19.535411  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   <features>
	I0729 12:19:19.535419  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <acpi/>
	I0729 12:19:19.535429  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <apic/>
	I0729 12:19:19.535444  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <pae/>
	I0729 12:19:19.535455  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     
	I0729 12:19:19.535466  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   </features>
	I0729 12:19:19.535477  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   <cpu mode='host-passthrough'>
	I0729 12:19:19.535490  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   
	I0729 12:19:19.535506  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   </cpu>
	I0729 12:19:19.535539  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   <os>
	I0729 12:19:19.535562  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <type>hvm</type>
	I0729 12:19:19.535610  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <boot dev='cdrom'/>
	I0729 12:19:19.535651  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <boot dev='hd'/>
	I0729 12:19:19.535666  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <bootmenu enable='no'/>
	I0729 12:19:19.535680  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   </os>
	I0729 12:19:19.535693  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   <devices>
	I0729 12:19:19.535704  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <disk type='file' device='cdrom'>
	I0729 12:19:19.535721  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/boot2docker.iso'/>
	I0729 12:19:19.535734  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <target dev='hdc' bus='scsi'/>
	I0729 12:19:19.535744  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <readonly/>
	I0729 12:19:19.535765  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     </disk>
	I0729 12:19:19.535778  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <disk type='file' device='disk'>
	I0729 12:19:19.535788  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 12:19:19.535805  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/kubernetes-upgrade-714444.rawdisk'/>
	I0729 12:19:19.535817  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <target dev='hda' bus='virtio'/>
	I0729 12:19:19.535827  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     </disk>
	I0729 12:19:19.535842  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <interface type='network'>
	I0729 12:19:19.535856  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <source network='mk-kubernetes-upgrade-714444'/>
	I0729 12:19:19.535866  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <model type='virtio'/>
	I0729 12:19:19.535878  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     </interface>
	I0729 12:19:19.535886  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <interface type='network'>
	I0729 12:19:19.535899  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <source network='default'/>
	I0729 12:19:19.535910  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <model type='virtio'/>
	I0729 12:19:19.535923  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     </interface>
	I0729 12:19:19.535933  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <serial type='pty'>
	I0729 12:19:19.535942  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <target port='0'/>
	I0729 12:19:19.535952  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     </serial>
	I0729 12:19:19.535960  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <console type='pty'>
	I0729 12:19:19.535971  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <target type='serial' port='0'/>
	I0729 12:19:19.535989  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     </console>
	I0729 12:19:19.536010  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     <rng model='virtio'>
	I0729 12:19:19.536025  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)       <backend model='random'>/dev/random</backend>
	I0729 12:19:19.536035  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     </rng>
	I0729 12:19:19.536047  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     
	I0729 12:19:19.536056  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)     
	I0729 12:19:19.536078  160242 main.go:141] libmachine: (kubernetes-upgrade-714444)   </devices>
	I0729 12:19:19.536089  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) </domain>
	I0729 12:19:19.536102  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) 
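
	A domain defined from XML like the one above can also be registered and booted through the virsh CLI. The following sketch shells out to virsh (assumed on PATH, with a qemu:///system connection) purely as an illustration; it is not how minikube's kvm2 driver itself talks to libvirt:

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    )

	    // defineAndStart writes the domain XML to a temp file, defines it with
	    // `virsh define`, and boots it with `virsh start`.
	    func defineAndStart(name, domainXML string) error {
	    	f, err := os.CreateTemp("", name+"-*.xml")
	    	if err != nil {
	    		return err
	    	}
	    	defer os.Remove(f.Name())
	    	if _, err := f.WriteString(domainXML); err != nil {
	    		return err
	    	}
	    	f.Close()

	    	for _, args := range [][]string{
	    		{"--connect", "qemu:///system", "define", f.Name()},
	    		{"--connect", "qemu:///system", "start", name},
	    	} {
	    		out, err := exec.Command("virsh", args...).CombinedOutput()
	    		if err != nil {
	    			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
	    		}
	    	}
	    	return nil
	    }

	    func main() {
	    	// domainXML would be the <domain> definition shown in the log above.
	    	_ = defineAndStart("kubernetes-upgrade-714444", "<domain type='kvm'>...</domain>")
	    }
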
	I0729 12:19:19.543294  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:2e:f9:1b in network default
	I0729 12:19:19.543848  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Ensuring networks are active...
	I0729 12:19:19.543876  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:19.544563  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Ensuring network default is active
	I0729 12:19:19.544840  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Ensuring network mk-kubernetes-upgrade-714444 is active
	I0729 12:19:19.545344  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Getting domain xml...
	I0729 12:19:19.546149  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Creating domain...
	I0729 12:19:20.891595  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Waiting to get IP...
	I0729 12:19:20.892626  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:20.893155  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:20.893187  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:20.893136  160596 retry.go:31] will retry after 245.467389ms: waiting for machine to come up
	I0729 12:19:21.140889  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:21.141365  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:21.141395  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:21.141334  160596 retry.go:31] will retry after 367.787587ms: waiting for machine to come up
	I0729 12:19:21.511177  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:21.511714  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:21.511745  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:21.511664  160596 retry.go:31] will retry after 429.2129ms: waiting for machine to come up
	I0729 12:19:21.942136  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:21.942650  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:21.942678  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:21.942591  160596 retry.go:31] will retry after 595.882533ms: waiting for machine to come up
	I0729 12:19:22.540579  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:22.541028  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:22.541054  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:22.541000  160596 retry.go:31] will retry after 567.070774ms: waiting for machine to come up
	I0729 12:19:23.109496  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:23.110088  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:23.110116  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:23.110039  160596 retry.go:31] will retry after 576.409901ms: waiting for machine to come up
	I0729 12:19:23.687854  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:23.688295  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:23.688326  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:23.688237  160596 retry.go:31] will retry after 847.12828ms: waiting for machine to come up
	I0729 12:19:24.536811  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:24.537266  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:24.537291  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:24.537223  160596 retry.go:31] will retry after 1.067901338s: waiting for machine to come up
	I0729 12:19:25.607418  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:25.607928  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:25.607957  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:25.607874  160596 retry.go:31] will retry after 1.687318706s: waiting for machine to come up
	I0729 12:19:27.297178  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:27.297659  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:27.297682  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:27.297610  160596 retry.go:31] will retry after 2.058929603s: waiting for machine to come up
	I0729 12:19:29.358735  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:29.359204  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:29.359231  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:29.359157  160596 retry.go:31] will retry after 2.16311142s: waiting for machine to come up
	I0729 12:19:31.525644  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:31.525992  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:31.526016  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:31.525950  160596 retry.go:31] will retry after 2.715547956s: waiting for machine to come up
	I0729 12:19:34.242606  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:34.242996  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:34.243028  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:34.242931  160596 retry.go:31] will retry after 4.250903346s: waiting for machine to come up
	I0729 12:19:38.498187  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:38.498606  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:19:38.498634  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:19:38.498557  160596 retry.go:31] will retry after 5.401510854s: waiting for machine to come up
	I0729 12:19:43.901460  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:43.901994  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has current primary IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:43.902021  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Found IP for machine: 192.168.50.36
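
	The repeated "will retry after ..." lines above come from a wait loop that polls the network's DHCP leases with a growing delay until the domain reports an address. A minimal, self-contained sketch of that pattern; lookupIP is a hypothetical stand-in for the lease lookup, and the backoff growth factor is an assumption rather than minikube's exact retry.go behaviour:

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"time"
	    )

	    // waitForIP polls fn with an increasing delay until it returns an address
	    // or the overall deadline is exceeded.
	    func waitForIP(fn func() (string, error), deadline time.Duration) (string, error) {
	    	delay := 250 * time.Millisecond
	    	start := time.Now()
	    	for {
	    		ip, err := fn()
	    		if err == nil && ip != "" {
	    			return ip, nil
	    		}
	    		if time.Since(start) > deadline {
	    			return "", errors.New("timed out waiting for machine IP")
	    		}
	    		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
	    		time.Sleep(delay)
	    		delay = delay * 3 / 2 // grow the delay, roughly like the intervals in the log
	    	}
	    }

	    func main() {
	    	attempts := 0
	    	lookupIP := func() (string, error) {
	    		attempts++
	    		if attempts < 5 {
	    			return "", errors.New("no DHCP lease yet")
	    		}
	    		return "192.168.50.36", nil
	    	}
	    	ip, err := waitForIP(lookupIP, 3*time.Minute)
	    	fmt.Println(ip, err)
	    }
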
	I0729 12:19:43.902035  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Reserving static IP address...
	I0729 12:19:43.902423  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-714444", mac: "52:54:00:92:96:14", ip: "192.168.50.36"} in network mk-kubernetes-upgrade-714444
	I0729 12:19:43.991215  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Getting to WaitForSSH function...
	I0729 12:19:43.991243  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Reserved static IP address: 192.168.50.36
	I0729 12:19:43.991261  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Waiting for SSH to be available...
	I0729 12:19:43.994039  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:43.994548  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:minikube Clientid:01:52:54:00:92:96:14}
	I0729 12:19:43.994583  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:43.994671  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Using SSH client type: external
	I0729 12:19:43.994696  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa (-rw-------)
	I0729 12:19:43.994723  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 12:19:43.994738  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | About to run SSH command:
	I0729 12:19:43.994751  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | exit 0
	I0729 12:19:44.120979  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | SSH cmd err, output: <nil>: 
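
	The "external" SSH client path seen above simply runs /usr/bin/ssh with a fixed, non-interactive option set and the machine's generated private key, using `exit 0` as the reachability probe. A sketch of an equivalent invocation via os/exec; the option list mirrors the log, the helper itself is illustrative:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // runOverSSH executes a command on the guest with the same non-interactive
	    // options seen in the log; addr and keyPath are the values from this run.
	    func runOverSSH(addr, keyPath, command string) (string, error) {
	    	args := []string{
	    		"-F", "/dev/null",
	    		"-o", "ConnectionAttempts=3",
	    		"-o", "ConnectTimeout=10",
	    		"-o", "StrictHostKeyChecking=no",
	    		"-o", "UserKnownHostsFile=/dev/null",
	    		"-o", "IdentitiesOnly=yes",
	    		"-i", keyPath,
	    		"-p", "22",
	    		"docker@" + addr,
	    		command,
	    	}
	    	out, err := exec.Command("ssh", args...).CombinedOutput()
	    	return string(out), err
	    }

	    func main() {
	    	out, err := runOverSSH("192.168.50.36",
	    		"/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa",
	    		"exit 0")
	    	fmt.Println(out, err)
	    }
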
	I0729 12:19:44.121182  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) KVM machine creation complete!
	I0729 12:19:44.121555  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetConfigRaw
	I0729 12:19:44.122107  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:19:44.122311  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:19:44.122495  160242 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 12:19:44.122512  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetState
	I0729 12:19:44.123741  160242 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 12:19:44.123765  160242 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 12:19:44.123773  160242 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 12:19:44.123782  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:19:44.126251  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.126622  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:44.126657  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.126861  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:19:44.127051  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:44.127204  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:44.127342  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:19:44.127497  160242 main.go:141] libmachine: Using SSH client type: native
	I0729 12:19:44.127742  160242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:19:44.127767  160242 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 12:19:44.240265  160242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:19:44.240289  160242 main.go:141] libmachine: Detecting the provisioner...
	I0729 12:19:44.240300  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:19:44.243103  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.243458  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:44.243500  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.243681  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:19:44.243907  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:44.244132  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:44.244270  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:19:44.244429  160242 main.go:141] libmachine: Using SSH client type: native
	I0729 12:19:44.244661  160242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:19:44.244674  160242 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 12:19:44.359015  160242 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 12:19:44.359101  160242 main.go:141] libmachine: found compatible host: buildroot
	I0729 12:19:44.359112  160242 main.go:141] libmachine: Provisioning with buildroot...
	I0729 12:19:44.359125  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetMachineName
	I0729 12:19:44.359402  160242 buildroot.go:166] provisioning hostname "kubernetes-upgrade-714444"
	I0729 12:19:44.359434  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetMachineName
	I0729 12:19:44.359633  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:19:44.362402  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.362783  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:44.362813  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.362963  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:19:44.363156  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:44.363321  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:44.363483  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:19:44.363682  160242 main.go:141] libmachine: Using SSH client type: native
	I0729 12:19:44.363860  160242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:19:44.363880  160242 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-714444 && echo "kubernetes-upgrade-714444" | sudo tee /etc/hostname
	I0729 12:19:44.496506  160242 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-714444
	
	I0729 12:19:44.496562  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:19:44.499546  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.499973  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:44.500012  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.500181  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:19:44.500400  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:44.500607  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:44.500773  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:19:44.500949  160242 main.go:141] libmachine: Using SSH client type: native
	I0729 12:19:44.501183  160242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:19:44.501202  160242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-714444' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-714444/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-714444' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:19:44.629266  160242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:19:44.629299  160242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 12:19:44.629353  160242 buildroot.go:174] setting up certificates
	I0729 12:19:44.629367  160242 provision.go:84] configureAuth start
	I0729 12:19:44.629379  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetMachineName
	I0729 12:19:44.629681  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:19:44.632200  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.632549  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:44.632582  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.632735  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:19:44.635109  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.635454  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:44.635484  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.635657  160242 provision.go:143] copyHostCerts
	I0729 12:19:44.635743  160242 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 12:19:44.635755  160242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 12:19:44.635811  160242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 12:19:44.635905  160242 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 12:19:44.635912  160242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 12:19:44.635931  160242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 12:19:44.635980  160242 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 12:19:44.635987  160242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 12:19:44.636004  160242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 12:19:44.636052  160242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-714444 san=[127.0.0.1 192.168.50.36 kubernetes-upgrade-714444 localhost minikube]
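
	configureAuth issues a server certificate whose subject alternative names cover the loopback address, the machine IP, and the hostnames listed in the san=[...] field above. A minimal sketch of producing such a certificate with crypto/x509; it self-signs for brevity, whereas minikube signs the server cert with its own CA key:

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	key, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		panic(err)
	    	}
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-714444"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		// SANs matching the ones listed in the log above.
	    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.36")},
	    		DNSNames:    []string{"kubernetes-upgrade-714444", "localhost", "minikube"},
	    	}
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    	if err != nil {
	    		panic(err)
	    	}
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
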
	I0729 12:19:44.834236  160242 provision.go:177] copyRemoteCerts
	I0729 12:19:44.834307  160242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:19:44.834346  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:19:44.836922  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.837257  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:44.837293  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:44.837476  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:19:44.837686  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:44.837821  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:19:44.837938  160242 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:19:44.927629  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 12:19:44.953915  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 12:19:44.980150  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:19:45.006432  160242 provision.go:87] duration metric: took 377.04737ms to configureAuth
	I0729 12:19:45.006467  160242 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:19:45.006703  160242 config.go:182] Loaded profile config "kubernetes-upgrade-714444": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 12:19:45.006810  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:19:45.009881  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.010257  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:45.010292  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.010520  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:19:45.010742  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:45.010929  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:45.011065  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:19:45.011222  160242 main.go:141] libmachine: Using SSH client type: native
	I0729 12:19:45.011387  160242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:19:45.011403  160242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:19:45.290415  160242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:19:45.290466  160242 main.go:141] libmachine: Checking connection to Docker...
	I0729 12:19:45.290478  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetURL
	I0729 12:19:45.291823  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Using libvirt version 6000000
	I0729 12:19:45.294183  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.294539  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:45.294573  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.294802  160242 main.go:141] libmachine: Docker is up and running!
	I0729 12:19:45.294822  160242 main.go:141] libmachine: Reticulating splines...
	I0729 12:19:45.294829  160242 client.go:171] duration metric: took 26.48005942s to LocalClient.Create
	I0729 12:19:45.294853  160242 start.go:167] duration metric: took 26.480129344s to libmachine.API.Create "kubernetes-upgrade-714444"
	I0729 12:19:45.294864  160242 start.go:293] postStartSetup for "kubernetes-upgrade-714444" (driver="kvm2")
	I0729 12:19:45.294878  160242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:19:45.294903  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:19:45.295170  160242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:19:45.295205  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:19:45.298212  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.298656  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:45.298685  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.298864  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:19:45.299060  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:45.299211  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:19:45.299389  160242 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:19:45.387438  160242 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:19:45.391992  160242 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:19:45.392028  160242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 12:19:45.392098  160242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 12:19:45.392170  160242 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 12:19:45.392257  160242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:19:45.401577  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:19:45.424885  160242 start.go:296] duration metric: took 130.002118ms for postStartSetup
	I0729 12:19:45.424984  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetConfigRaw
	I0729 12:19:45.425637  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:19:45.428479  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.428824  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:45.428857  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.429131  160242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/config.json ...
	I0729 12:19:45.429323  160242 start.go:128] duration metric: took 26.635570481s to createHost
	I0729 12:19:45.429347  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:19:45.431642  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.431984  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:45.432043  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.432233  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:19:45.432430  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:45.432608  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:45.432753  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:19:45.432900  160242 main.go:141] libmachine: Using SSH client type: native
	I0729 12:19:45.433127  160242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:19:45.433141  160242 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 12:19:45.545562  160242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722255585.523305200
	
	I0729 12:19:45.545588  160242 fix.go:216] guest clock: 1722255585.523305200
	I0729 12:19:45.545597  160242 fix.go:229] Guest: 2024-07-29 12:19:45.5233052 +0000 UTC Remote: 2024-07-29 12:19:45.429334451 +0000 UTC m=+50.712904254 (delta=93.970749ms)
	I0729 12:19:45.545623  160242 fix.go:200] guest clock delta is within tolerance: 93.970749ms
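
	The clock check parses the guest's `date +%s.%N` output and compares it with the host time, accepting small deltas like the 93ms reported above. A sketch of that comparison; the tolerance constant is an assumption for illustration, not minikube's actual threshold:

	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    	"time"
	    )

	    // parseGuestClock converts "seconds.nanoseconds" output from `date +%s.%N`
	    // into a time.Time.
	    func parseGuestClock(s string) (time.Time, error) {
	    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	    	sec, err := strconv.ParseInt(parts[0], 10, 64)
	    	if err != nil {
	    		return time.Time{}, err
	    	}
	    	var nsec int64
	    	if len(parts) == 2 {
	    		nsec, err = strconv.ParseInt(parts[1], 10, 64)
	    		if err != nil {
	    			return time.Time{}, err
	    		}
	    	}
	    	return time.Unix(sec, nsec), nil
	    }

	    func main() {
	    	guest, err := parseGuestClock("1722255585.523305200")
	    	if err != nil {
	    		panic(err)
	    	}
	    	delta := time.Since(guest)
	    	if delta < 0 {
	    		delta = -delta
	    	}
	    	// Tolerance is an assumed value; minikube's threshold may differ.
	    	const tolerance = 2 * time.Second
	    	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
	    }
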
	I0729 12:19:45.545630  160242 start.go:83] releasing machines lock for "kubernetes-upgrade-714444", held for 26.752026121s
	I0729 12:19:45.545662  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:19:45.545968  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:19:45.549135  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.549493  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:45.549524  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.549678  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:19:45.550169  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:19:45.550377  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:19:45.550443  160242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:19:45.550505  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:19:45.550555  160242 ssh_runner.go:195] Run: cat /version.json
	I0729 12:19:45.550580  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:19:45.553449  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.553636  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.553877  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:45.553903  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.554067  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:19:45.554079  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:45.554166  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:45.554256  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:19:45.554351  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:45.554420  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:19:45.554492  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:19:45.554611  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:19:45.554642  160242 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:19:45.554729  160242 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:19:45.670764  160242 ssh_runner.go:195] Run: systemctl --version
	I0729 12:19:45.677102  160242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:19:45.842660  160242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:19:45.848893  160242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:19:45.848999  160242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:19:45.865438  160242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 12:19:45.865472  160242 start.go:495] detecting cgroup driver to use...
	I0729 12:19:45.865563  160242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:19:45.891092  160242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:19:45.906707  160242 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:19:45.906807  160242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:19:45.921390  160242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:19:45.935972  160242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:19:46.053941  160242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:19:46.204314  160242 docker.go:233] disabling docker service ...
	I0729 12:19:46.204402  160242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:19:46.218554  160242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:19:46.235783  160242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:19:46.365279  160242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:19:46.480250  160242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:19:46.495973  160242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:19:46.518255  160242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 12:19:46.518341  160242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:19:46.531368  160242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:19:46.531442  160242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:19:46.542277  160242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:19:46.553190  160242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:19:46.563795  160242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:19:46.574360  160242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:19:46.586495  160242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 12:19:46.586586  160242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 12:19:46.603598  160242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:19:46.613334  160242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:19:46.729460  160242 ssh_runner.go:195] Run: sudo systemctl restart crio
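	[editor's note] For anyone reproducing the runtime setup above by hand: lines 12:19:46.495973 through 12:19:46.729460 reduce to a handful of idempotent shell edits (point crictl at the CRI-O socket, pin the pause image, force the cgroupfs cgroup manager, then restart crio). A minimal Go sketch of the same sequence follows; it runs the commands locally for illustration, whereas the test issues them over SSH via ssh_runner, and the helper name configureCRIO is illustrative, not a minikube API.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configureCRIO mirrors the commands visible in the log above: write
	// /etc/crictl.yaml, set pause_image and cgroup_manager in the CRI-O drop-in,
	// then restart the service. Requires root; paths and values are taken
	// verbatim from the log.
	func configureCRIO() error {
		cmds := []string{
			`sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo systemctl daemon-reload && sudo systemctl restart crio`,
		}
		for _, c := range cmds {
			if out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput(); err != nil {
				return fmt.Errorf("%q failed: %v\n%s", c, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := configureCRIO(); err != nil {
			fmt.Println(err)
		}
	}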
	I0729 12:19:46.862391  160242 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:19:46.862480  160242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:19:46.867074  160242 start.go:563] Will wait 60s for crictl version
	I0729 12:19:46.867143  160242 ssh_runner.go:195] Run: which crictl
	I0729 12:19:46.870948  160242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:19:46.905412  160242 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:19:46.905501  160242 ssh_runner.go:195] Run: crio --version
	I0729 12:19:46.932468  160242 ssh_runner.go:195] Run: crio --version
	I0729 12:19:46.965712  160242 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 12:19:46.967124  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:19:46.972015  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:46.972400  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:19:33 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:19:46.972427  160242 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:19:46.972815  160242 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 12:19:46.978588  160242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:19:46.993684  160242 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-714444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-714444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:19:46.993818  160242 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 12:19:46.993880  160242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:19:47.026995  160242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 12:19:47.027070  160242 ssh_runner.go:195] Run: which lz4
	I0729 12:19:47.031755  160242 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 12:19:47.036610  160242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 12:19:47.036660  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 12:19:48.621588  160242 crio.go:462] duration metric: took 1.589880939s to copy over tarball
	I0729 12:19:48.621692  160242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 12:19:51.224164  160242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.602440212s)
	I0729 12:19:51.224201  160242 crio.go:469] duration metric: took 2.602578693s to extract the tarball
	I0729 12:19:51.224208  160242 ssh_runner.go:146] rm: /preloaded.tar.lz4
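	[editor's note] The preload step above (12:19:47.031755 through 12:19:51.224208) checks whether /preloaded.tar.lz4 already exists on the guest, copies the cached tarball over if not, unpacks it into /var with xattrs preserved, and deletes the tarball. A short Go sketch of that flow, under the assumption it is run as root on the guest with an lz4-capable tar; the function name extractPreload is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks a preloaded image tarball into /var using the same
	// tar flags as the log, then removes it. If the tarball is missing the
	// caller falls back to pulling images individually, as the test does.
	func extractPreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload not present, images will be pulled instead: %w", err)
		}
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract failed: %v\n%s", err, out)
		}
		return os.Remove(tarball)
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}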
	I0729 12:19:51.265959  160242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:19:51.308132  160242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 12:19:51.308163  160242 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 12:19:51.308240  160242 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 12:19:51.308270  160242 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 12:19:51.308290  160242 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 12:19:51.308300  160242 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 12:19:51.308327  160242 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 12:19:51.308362  160242 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 12:19:51.308388  160242 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 12:19:51.308279  160242 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 12:19:51.309855  160242 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 12:19:51.309863  160242 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 12:19:51.309884  160242 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 12:19:51.309870  160242 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 12:19:51.309932  160242 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 12:19:51.309961  160242 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 12:19:51.309980  160242 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 12:19:51.310120  160242 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 12:19:51.489381  160242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 12:19:51.496075  160242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 12:19:51.504121  160242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 12:19:51.512833  160242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 12:19:51.523378  160242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 12:19:51.567276  160242 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 12:19:51.567347  160242 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 12:19:51.567403  160242 ssh_runner.go:195] Run: which crictl
	I0729 12:19:51.578631  160242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 12:19:51.587302  160242 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 12:19:51.587351  160242 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 12:19:51.587407  160242 ssh_runner.go:195] Run: which crictl
	I0729 12:19:51.606427  160242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 12:19:51.614971  160242 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 12:19:51.615023  160242 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 12:19:51.615078  160242 ssh_runner.go:195] Run: which crictl
	I0729 12:19:51.630465  160242 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 12:19:51.630511  160242 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 12:19:51.630568  160242 ssh_runner.go:195] Run: which crictl
	I0729 12:19:51.672371  160242 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 12:19:51.672420  160242 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 12:19:51.672466  160242 ssh_runner.go:195] Run: which crictl
	I0729 12:19:51.672476  160242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 12:19:51.678022  160242 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 12:19:51.678059  160242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 12:19:51.678066  160242 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 12:19:51.678117  160242 ssh_runner.go:195] Run: which crictl
	I0729 12:19:51.680061  160242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 12:19:51.680086  160242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 12:19:51.680102  160242 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 12:19:51.680127  160242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 12:19:51.680139  160242 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 12:19:51.680175  160242 ssh_runner.go:195] Run: which crictl
	I0729 12:19:51.766123  160242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 12:19:51.781000  160242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 12:19:51.781052  160242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 12:19:51.781091  160242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 12:19:51.781161  160242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 12:19:51.782372  160242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 12:19:51.782388  160242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 12:19:51.829998  160242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 12:19:51.830055  160242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 12:19:51.936080  160242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 12:19:52.076813  160242 cache_images.go:92] duration metric: took 768.626613ms to LoadCachedImages
	W0729 12:19:52.076959  160242 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19336-113730/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0729 12:19:52.076995  160242 kubeadm.go:934] updating node { 192.168.50.36 8443 v1.20.0 crio true true} ...
	I0729 12:19:52.077139  160242 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-714444 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-714444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:19:52.077234  160242 ssh_runner.go:195] Run: crio config
	I0729 12:19:52.120794  160242 cni.go:84] Creating CNI manager for ""
	I0729 12:19:52.120819  160242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:19:52.120832  160242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:19:52.120850  160242 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.36 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-714444 NodeName:kubernetes-upgrade-714444 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 12:19:52.121012  160242 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-714444"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:19:52.121077  160242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 12:19:52.130769  160242 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:19:52.130847  160242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 12:19:52.140148  160242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0729 12:19:52.156800  160242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:19:52.172912  160242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0729 12:19:52.189209  160242 ssh_runner.go:195] Run: grep 192.168.50.36	control-plane.minikube.internal$ /etc/hosts
	I0729 12:19:52.193177  160242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
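	[editor's note] The one-liner at 12:19:52.193177 (and the earlier host.minikube.internal variant) is an idempotent /etc/hosts update: strip any existing line for the hostname, then append the fresh "IP<TAB>hostname" mapping. A plain-Go equivalent, written against a scratch file so it can be run safely outside the guest; ensureHostsEntry is an illustrative name, not minikube code:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any stale line ending in "\t<host>" and appends the
	// desired mapping, matching the grep -v + echo + cp trick in the log.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+host) {
				continue // drop blanks and any existing entry for this host
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		tmp := "hosts.demo" // scratch file instead of the real /etc/hosts
		_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644)
		if err := ensureHostsEntry(tmp, "192.168.50.36", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
		out, _ := os.ReadFile(tmp)
		fmt.Print(string(out))
	}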
	I0729 12:19:52.205088  160242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:19:52.315335  160242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:19:52.332249  160242 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444 for IP: 192.168.50.36
	I0729 12:19:52.332273  160242 certs.go:194] generating shared ca certs ...
	I0729 12:19:52.332294  160242 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:19:52.332479  160242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 12:19:52.332541  160242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 12:19:52.332574  160242 certs.go:256] generating profile certs ...
	I0729 12:19:52.332650  160242 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/client.key
	I0729 12:19:52.332667  160242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/client.crt with IP's: []
	I0729 12:19:52.393213  160242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/client.crt ...
	I0729 12:19:52.393247  160242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/client.crt: {Name:mk27ffb1dc771b2072bf659b3af62401d32078f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:19:52.393430  160242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/client.key ...
	I0729 12:19:52.393482  160242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/client.key: {Name:mk5a84afef8c59955e0e55127634e22a02a18d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:19:52.393603  160242 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.key.24ba74ec
	I0729 12:19:52.393625  160242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.crt.24ba74ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.36]
	I0729 12:19:52.564369  160242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.crt.24ba74ec ...
	I0729 12:19:52.564409  160242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.crt.24ba74ec: {Name:mk999f786a5a49b4ecbabe50764ddc6ad699ab5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:19:52.564606  160242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.key.24ba74ec ...
	I0729 12:19:52.564624  160242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.key.24ba74ec: {Name:mk7781ea8bccec4945d2d874a36d43d169b27834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:19:52.564698  160242 certs.go:381] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.crt.24ba74ec -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.crt
	I0729 12:19:52.564779  160242 certs.go:385] copying /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.key.24ba74ec -> /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.key
	I0729 12:19:52.564829  160242 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/proxy-client.key
	I0729 12:19:52.564844  160242 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/proxy-client.crt with IP's: []
	I0729 12:19:52.678877  160242 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/proxy-client.crt ...
	I0729 12:19:52.678912  160242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/proxy-client.crt: {Name:mk6555b548f81e1b31f04780b74547c280121483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:19:52.679068  160242 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/proxy-client.key ...
	I0729 12:19:52.679084  160242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/proxy-client.key: {Name:mkcccdde9df6e3bb3f9a1df19779f8202111e96b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:19:52.679271  160242 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 12:19:52.679311  160242 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 12:19:52.679321  160242 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 12:19:52.679341  160242 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 12:19:52.679364  160242 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:19:52.679384  160242 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 12:19:52.679419  160242 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:19:52.679981  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:19:52.704669  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 12:19:52.728650  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:19:52.756339  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:19:52.780531  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 12:19:52.804410  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 12:19:52.828257  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:19:52.852021  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 12:19:52.876798  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 12:19:52.900796  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:19:52.925295  160242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 12:19:52.951612  160242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:19:52.969045  160242 ssh_runner.go:195] Run: openssl version
	I0729 12:19:52.975188  160242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 12:19:52.986127  160242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 12:19:52.990927  160242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 12:19:52.991001  160242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 12:19:52.996878  160242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:19:53.007646  160242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:19:53.018558  160242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:19:53.023354  160242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:19:53.023426  160242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:19:53.030880  160242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:19:53.044840  160242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 12:19:53.056626  160242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 12:19:53.061147  160242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 12:19:53.061216  160242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 12:19:53.067010  160242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
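	[editor's note] The certificate block ending at 12:19:53.067010 installs each PEM under /usr/share/ca-certificates and then creates a /etc/ssl/certs/<subject-hash>.0 symlink, which is the lookup scheme OpenSSL-based clients use. A minimal Go sketch of one such link, shelling out to the same openssl and ln commands seen in the log; linkCACert is an illustrative helper name and requires root for the symlink step:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a PEM and creates the
	// corresponding <hash>.0 symlink in /etc/ssl/certs if it is not already there.
	func linkCACert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pem, err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pem, link)
		if o, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("linking: %v\n%s", err, o)
		}
		return nil
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}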
	I0729 12:19:53.077484  160242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:19:53.081807  160242 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 12:19:53.081862  160242 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-714444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-714444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:19:53.081960  160242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:19:53.082037  160242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:19:53.117273  160242 cri.go:89] found id: ""
	I0729 12:19:53.117354  160242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 12:19:53.130246  160242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 12:19:53.140090  160242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 12:19:53.149478  160242 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 12:19:53.149510  160242 kubeadm.go:157] found existing configuration files:
	
	I0729 12:19:53.149572  160242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 12:19:53.167417  160242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 12:19:53.167499  160242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 12:19:53.179605  160242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 12:19:53.192267  160242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 12:19:53.192346  160242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 12:19:53.213106  160242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 12:19:53.223031  160242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 12:19:53.223111  160242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 12:19:53.237034  160242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 12:19:53.247131  160242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 12:19:53.247212  160242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 12:19:53.256862  160242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 12:19:53.366980  160242 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 12:19:53.367065  160242 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 12:19:53.510383  160242 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 12:19:53.510546  160242 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 12:19:53.510711  160242 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 12:19:53.702547  160242 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 12:19:53.803586  160242 out.go:204]   - Generating certificates and keys ...
	I0729 12:19:53.803716  160242 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 12:19:53.803833  160242 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 12:19:53.880092  160242 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 12:19:53.975619  160242 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 12:19:54.166482  160242 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 12:19:54.250806  160242 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 12:19:54.521263  160242 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 12:19:54.521558  160242 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-714444 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	I0729 12:19:54.775006  160242 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 12:19:54.775311  160242 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-714444 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	I0729 12:19:54.954102  160242 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 12:19:55.228269  160242 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 12:19:55.327332  160242 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 12:19:55.327414  160242 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 12:19:55.386899  160242 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 12:19:55.601928  160242 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 12:19:55.914862  160242 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 12:19:56.072608  160242 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 12:19:56.102617  160242 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 12:19:56.102798  160242 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 12:19:56.102877  160242 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 12:19:56.242351  160242 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 12:19:56.244507  160242 out.go:204]   - Booting up control plane ...
	I0729 12:19:56.244663  160242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 12:19:56.248841  160242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 12:19:56.257407  160242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 12:19:56.258529  160242 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 12:19:56.264305  160242 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 12:20:36.259106  160242 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 12:20:36.259506  160242 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 12:20:36.259818  160242 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 12:20:41.260152  160242 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 12:20:41.260484  160242 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 12:20:51.259305  160242 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 12:20:51.259545  160242 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 12:21:11.258962  160242 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 12:21:11.259237  160242 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 12:21:51.260369  160242 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 12:21:51.260648  160242 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 12:21:51.260679  160242 kubeadm.go:310] 
	I0729 12:21:51.260741  160242 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 12:21:51.260798  160242 kubeadm.go:310] 		timed out waiting for the condition
	I0729 12:21:51.260807  160242 kubeadm.go:310] 
	I0729 12:21:51.260857  160242 kubeadm.go:310] 	This error is likely caused by:
	I0729 12:21:51.260899  160242 kubeadm.go:310] 		- The kubelet is not running
	I0729 12:21:51.261057  160242 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 12:21:51.261075  160242 kubeadm.go:310] 
	I0729 12:21:51.261201  160242 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 12:21:51.261248  160242 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 12:21:51.261295  160242 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 12:21:51.261305  160242 kubeadm.go:310] 
	I0729 12:21:51.261447  160242 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 12:21:51.261576  160242 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 12:21:51.261588  160242 kubeadm.go:310] 
	I0729 12:21:51.261751  160242 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 12:21:51.261868  160242 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 12:21:51.261987  160242 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 12:21:51.262106  160242 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 12:21:51.262139  160242 kubeadm.go:310] 
	I0729 12:21:51.262231  160242 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 12:21:51.262315  160242 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 12:21:51.262411  160242 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 12:21:51.262511  160242 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-714444 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-714444 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-714444 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-714444 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 12:21:51.262576  160242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 12:21:52.246862  160242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:21:52.261205  160242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 12:21:52.274665  160242 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 12:21:52.274699  160242 kubeadm.go:157] found existing configuration files:
	
	I0729 12:21:52.274780  160242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 12:21:52.287483  160242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 12:21:52.287572  160242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 12:21:52.300735  160242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 12:21:52.311275  160242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 12:21:52.311359  160242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 12:21:52.321472  160242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 12:21:52.330802  160242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 12:21:52.330881  160242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 12:21:52.340470  160242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 12:21:52.349623  160242 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 12:21:52.349705  160242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 12:21:52.358632  160242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 12:21:52.424880  160242 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 12:21:52.424988  160242 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 12:21:52.552027  160242 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 12:21:52.552322  160242 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 12:21:52.552600  160242 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 12:21:52.741089  160242 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 12:21:52.743256  160242 out.go:204]   - Generating certificates and keys ...
	I0729 12:21:52.743386  160242 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 12:21:52.743522  160242 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 12:21:52.743658  160242 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 12:21:52.743766  160242 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 12:21:52.743915  160242 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 12:21:52.744007  160242 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 12:21:52.744131  160242 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 12:21:52.744211  160242 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 12:21:52.744322  160242 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 12:21:52.744451  160242 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 12:21:52.744524  160242 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 12:21:52.744610  160242 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 12:21:52.976111  160242 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 12:21:53.258428  160242 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 12:21:53.532634  160242 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 12:21:53.630167  160242 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 12:21:53.647711  160242 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 12:21:53.648324  160242 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 12:21:53.648468  160242 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 12:21:53.793422  160242 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 12:21:53.796468  160242 out.go:204]   - Booting up control plane ...
	I0729 12:21:53.796592  160242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 12:21:53.805594  160242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 12:21:53.813854  160242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 12:21:53.815471  160242 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 12:21:53.821331  160242 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 12:22:33.824572  160242 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 12:22:33.824719  160242 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 12:22:33.824984  160242 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 12:22:38.825392  160242 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 12:22:38.825691  160242 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 12:22:48.826011  160242 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 12:22:48.826264  160242 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 12:23:08.825302  160242 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 12:23:08.825632  160242 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 12:23:48.825051  160242 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 12:23:48.825368  160242 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 12:23:48.825391  160242 kubeadm.go:310] 
	I0729 12:23:48.825470  160242 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 12:23:48.825554  160242 kubeadm.go:310] 		timed out waiting for the condition
	I0729 12:23:48.825564  160242 kubeadm.go:310] 
	I0729 12:23:48.825613  160242 kubeadm.go:310] 	This error is likely caused by:
	I0729 12:23:48.825661  160242 kubeadm.go:310] 		- The kubelet is not running
	I0729 12:23:48.825784  160242 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 12:23:48.825796  160242 kubeadm.go:310] 
	I0729 12:23:48.825957  160242 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 12:23:48.826018  160242 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 12:23:48.826075  160242 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 12:23:48.826087  160242 kubeadm.go:310] 
	I0729 12:23:48.826207  160242 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 12:23:48.826319  160242 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 12:23:48.826330  160242 kubeadm.go:310] 
	I0729 12:23:48.826464  160242 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 12:23:48.826579  160242 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 12:23:48.826691  160242 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 12:23:48.826786  160242 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 12:23:48.826822  160242 kubeadm.go:310] 
	I0729 12:23:48.826977  160242 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 12:23:48.827074  160242 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 12:23:48.827214  160242 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 12:23:48.827256  160242 kubeadm.go:394] duration metric: took 3m55.745398998s to StartCluster
	I0729 12:23:48.827317  160242 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 12:23:48.827385  160242 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 12:23:48.864216  160242 cri.go:89] found id: ""
	I0729 12:23:48.864244  160242 logs.go:276] 0 containers: []
	W0729 12:23:48.864255  160242 logs.go:278] No container was found matching "kube-apiserver"
	I0729 12:23:48.864263  160242 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 12:23:48.864337  160242 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 12:23:48.895617  160242 cri.go:89] found id: ""
	I0729 12:23:48.895646  160242 logs.go:276] 0 containers: []
	W0729 12:23:48.895654  160242 logs.go:278] No container was found matching "etcd"
	I0729 12:23:48.895660  160242 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 12:23:48.895727  160242 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 12:23:48.927355  160242 cri.go:89] found id: ""
	I0729 12:23:48.927389  160242 logs.go:276] 0 containers: []
	W0729 12:23:48.927400  160242 logs.go:278] No container was found matching "coredns"
	I0729 12:23:48.927408  160242 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 12:23:48.927486  160242 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 12:23:48.960982  160242 cri.go:89] found id: ""
	I0729 12:23:48.961016  160242 logs.go:276] 0 containers: []
	W0729 12:23:48.961027  160242 logs.go:278] No container was found matching "kube-scheduler"
	I0729 12:23:48.961033  160242 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 12:23:48.961108  160242 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 12:23:48.994379  160242 cri.go:89] found id: ""
	I0729 12:23:48.994405  160242 logs.go:276] 0 containers: []
	W0729 12:23:48.994413  160242 logs.go:278] No container was found matching "kube-proxy"
	I0729 12:23:48.994419  160242 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 12:23:48.994480  160242 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 12:23:49.026336  160242 cri.go:89] found id: ""
	I0729 12:23:49.026373  160242 logs.go:276] 0 containers: []
	W0729 12:23:49.026385  160242 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 12:23:49.026393  160242 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 12:23:49.026465  160242 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 12:23:49.063213  160242 cri.go:89] found id: ""
	I0729 12:23:49.063243  160242 logs.go:276] 0 containers: []
	W0729 12:23:49.063254  160242 logs.go:278] No container was found matching "kindnet"
	I0729 12:23:49.063280  160242 logs.go:123] Gathering logs for kubelet ...
	I0729 12:23:49.063310  160242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 12:23:49.115024  160242 logs.go:123] Gathering logs for dmesg ...
	I0729 12:23:49.115069  160242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 12:23:49.128882  160242 logs.go:123] Gathering logs for describe nodes ...
	I0729 12:23:49.128912  160242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 12:23:49.253169  160242 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 12:23:49.253194  160242 logs.go:123] Gathering logs for CRI-O ...
	I0729 12:23:49.253207  160242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 12:23:49.355116  160242 logs.go:123] Gathering logs for container status ...
	I0729 12:23:49.355162  160242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 12:23:49.393848  160242 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 12:23:49.393893  160242 out.go:239] * 
	* 
	W0729 12:23:49.393954  160242 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 12:23:49.393986  160242 out.go:239] * 
	* 
	W0729 12:23:49.394880  160242 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 12:23:49.398229  160242 out.go:177] 
	W0729 12:23:49.399568  160242 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 12:23:49.399639  160242 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 12:23:49.399668  160242 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 12:23:49.401208  160242 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-714444 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
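The suggestion printed in the stderr block above points at the kubelet cgroup driver. A minimal retry sketch, assuming the same profile, memory, driver, and runtime flags recorded for this run (the --extra-config value is only the hint minikube itself suggests, not verified against this host):

	# retry of the failing v1.20.0 start, adding the cgroup-driver hint from the suggestion above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-714444 \
	  --memory=2200 --kubernetes-version=v1.20.0 \
	  --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still fails its health check, the 'journalctl -xeu kubelet' and crictl commands already listed in the log are the next step for inspecting why the control-plane containers never came up.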
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-714444
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-714444: (1.42462263s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-714444 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-714444 status --format={{.Host}}: exit status 7 (65.701466ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-714444 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0729 12:24:10.441811  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-714444 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.996499195s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-714444 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-714444 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-714444 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (96.84621ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-714444] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19336
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-714444
	    minikube start -p kubernetes-upgrade-714444 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7144442 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-714444 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
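As the K8S_DOWNGRADE_UNSUPPORTED suggestion above notes, minikube refuses an in-place downgrade of an existing cluster; the available paths are recreating the profile at the older version, starting a second profile, or continuing at the newer version. A minimal sketch of the first option, using the profile name and binary from this run:

	# Delete the existing profile and recreate it at the requested older Kubernetes version
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-714444
	out/minikube-linux-amd64 start -p kubernetes-upgrade-714444 --kubernetes-version=v1.20.0
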
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-714444 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-714444 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (32.286563126s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-29 12:25:10.402029429 +0000 UTC m=+5966.866561267
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-714444 -n kubernetes-upgrade-714444
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-714444 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-714444 logs -n 25: (1.654388273s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-827339 sudo cat                            | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo                                | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo                                | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo cat                            | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo docker                         | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo                                | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo                                | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo cat                            | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo cat                            | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo                                | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo                                | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo                                | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo cat                            | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo cat                            | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo                                | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo                                | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo                                | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo find                           | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-827339 sudo crio                           | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p cilium-827339                                     | cilium-827339          | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC | 29 Jul 24 12:24 UTC |
	| start   | -p old-k8s-version-012814                            | old-k8s-version-012814 | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                        |         |         |                     |                     |
	| ssh     | cert-options-882510 ssh                              | cert-options-882510    | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | openssl x509 -text -noout -in                        |                        |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                        |         |         |                     |                     |
	| ssh     | -p cert-options-882510 -- sudo                       | cert-options-882510    | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                        |         |         |                     |                     |
	| delete  | -p cert-options-882510                               | cert-options-882510    | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC | 29 Jul 24 12:25 UTC |
	| start   | -p no-preload-384199 --memory=2200                   | no-preload-384199      | jenkins | v1.33.1 | 29 Jul 24 12:25 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                  |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:25:09
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:25:09.293398  168314 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:25:09.293551  168314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:25:09.293562  168314 out.go:304] Setting ErrFile to fd 2...
	I0729 12:25:09.293567  168314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:25:09.293757  168314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 12:25:09.294744  168314 out.go:298] Setting JSON to false
	I0729 12:25:09.296278  168314 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7660,"bootTime":1722248249,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:25:09.296350  168314 start.go:139] virtualization: kvm guest
	I0729 12:25:09.298251  168314 out.go:177] * [no-preload-384199] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:25:09.299772  168314 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 12:25:09.299795  168314 notify.go:220] Checking for updates...
	I0729 12:25:09.302268  168314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:25:09.303482  168314 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:25:09.304749  168314 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:25:09.305949  168314 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:25:09.307279  168314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:25:09.308945  168314 config.go:182] Loaded profile config "cert-expiration-524248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:25:09.309084  168314 config.go:182] Loaded profile config "kubernetes-upgrade-714444": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 12:25:09.309177  168314 config.go:182] Loaded profile config "old-k8s-version-012814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 12:25:09.309258  168314 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:25:09.348409  168314 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 12:25:09.349649  168314 start.go:297] selected driver: kvm2
	I0729 12:25:09.349673  168314 start.go:901] validating driver "kvm2" against <nil>
	I0729 12:25:09.349702  168314 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:25:09.350952  168314 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:25:09.351065  168314 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:25:09.368924  168314 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:25:09.369034  168314 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:25:09.369334  168314 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:25:09.369408  168314 cni.go:84] Creating CNI manager for ""
	I0729 12:25:09.369422  168314 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:25:09.369430  168314 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 12:25:09.369519  168314 start.go:340] cluster config:
	{Name:no-preload-384199 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-384199 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:25:09.369646  168314 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:25:09.372278  168314 out.go:177] * Starting "no-preload-384199" primary control-plane node in "no-preload-384199" cluster
	I0729 12:25:09.222095  166151 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 12:25:09.222112  166151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 12:25:09.222133  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:25:09.225262  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:25:09.225713  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:25:09.225738  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:25:09.225975  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:25:09.226119  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:25:09.226312  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:25:09.226443  166151 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:25:09.236139  166151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0729 12:25:09.236699  166151 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:25:09.237254  166151 main.go:141] libmachine: Using API Version  1
	I0729 12:25:09.237285  166151 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:25:09.237722  166151 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:25:09.237958  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetState
	I0729 12:25:09.240135  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:25:09.240549  166151 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 12:25:09.240568  166151 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 12:25:09.240589  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:25:09.243529  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:25:09.244046  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:25:09.244074  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:25:09.244264  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:25:09.244489  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:25:09.244690  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:25:09.244829  166151 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:25:09.390206  166151 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:25:09.411109  166151 api_server.go:52] waiting for apiserver process to appear ...
	I0729 12:25:09.411194  166151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:25:09.432108  166151 api_server.go:72] duration metric: took 265.831772ms to wait for apiserver process to appear ...
	I0729 12:25:09.432134  166151 api_server.go:88] waiting for apiserver healthz status ...
	I0729 12:25:09.432155  166151 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0729 12:25:09.438650  166151 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I0729 12:25:09.439714  166151 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 12:25:09.439740  166151 api_server.go:131] duration metric: took 7.599168ms to wait for apiserver health ...
	I0729 12:25:09.439751  166151 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 12:25:09.447157  166151 system_pods.go:59] 8 kube-system pods found
	I0729 12:25:09.447189  166151 system_pods.go:61] "coredns-5cfdc65f69-2lb5s" [fe57d5b2-eb3b-4cd4-a674-557219e1f61b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 12:25:09.447196  166151 system_pods.go:61] "coredns-5cfdc65f69-x6qg5" [0991ad10-fc32-49c0-8ad1-e05700551746] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 12:25:09.447204  166151 system_pods.go:61] "etcd-kubernetes-upgrade-714444" [de3220a9-a435-404c-8883-bd0bd942aa98] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 12:25:09.447210  166151 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-714444" [79eac8cc-54bb-4c08-a043-690da3be1088] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 12:25:09.447218  166151 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-714444" [fbaee085-b44b-4af8-9c55-c8c1b6eae17f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 12:25:09.447222  166151 system_pods.go:61] "kube-proxy-62xt2" [d936b416-5e70-4b27-90f8-18171b944aa7] Running
	I0729 12:25:09.447227  166151 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-714444" [4620edd4-9819-4673-b716-fa13e51853e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 12:25:09.447231  166151 system_pods.go:61] "storage-provisioner" [7f9725de-fbe1-48bf-af53-3e9b74c8c8fe] Running
	I0729 12:25:09.447240  166151 system_pods.go:74] duration metric: took 7.480628ms to wait for pod list to return data ...
	I0729 12:25:09.447251  166151 kubeadm.go:582] duration metric: took 280.980694ms to wait for: map[apiserver:true system_pods:true]
	I0729 12:25:09.447268  166151 node_conditions.go:102] verifying NodePressure condition ...
	I0729 12:25:09.451212  166151 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 12:25:09.451236  166151 node_conditions.go:123] node cpu capacity is 2
	I0729 12:25:09.451244  166151 node_conditions.go:105] duration metric: took 3.972524ms to run NodePressure ...
	I0729 12:25:09.451258  166151 start.go:241] waiting for startup goroutines ...
	I0729 12:25:09.560311  166151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 12:25:09.587249  166151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 12:25:10.323463  166151 main.go:141] libmachine: Making call to close driver server
	I0729 12:25:10.323494  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .Close
	I0729 12:25:10.323520  166151 main.go:141] libmachine: Making call to close driver server
	I0729 12:25:10.323539  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .Close
	I0729 12:25:10.323828  166151 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:25:10.323838  166151 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:25:10.323843  166151 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:25:10.323849  166151 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:25:10.323853  166151 main.go:141] libmachine: Making call to close driver server
	I0729 12:25:10.323858  166151 main.go:141] libmachine: Making call to close driver server
	I0729 12:25:10.323865  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .Close
	I0729 12:25:10.323866  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .Close
	I0729 12:25:10.324141  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Closing plugin on server side
	I0729 12:25:10.324146  166151 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:25:10.324161  166151 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:25:10.324411  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Closing plugin on server side
	I0729 12:25:10.324439  166151 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:25:10.324458  166151 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:25:10.331801  166151 main.go:141] libmachine: Making call to close driver server
	I0729 12:25:10.331819  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .Close
	I0729 12:25:10.332151  166151 main.go:141] libmachine: Successfully made call to close driver server
	I0729 12:25:10.332178  166151 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 12:25:10.332179  166151 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Closing plugin on server side
	I0729 12:25:10.334160  166151 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 12:25:10.335407  166151 addons.go:510] duration metric: took 1.169119882s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 12:25:10.335445  166151 start.go:246] waiting for cluster config update ...
	I0729 12:25:10.335464  166151 start.go:255] writing updated cluster config ...
	I0729 12:25:10.335709  166151 ssh_runner.go:195] Run: rm -f paused
	I0729 12:25:10.386810  166151 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 12:25:10.388562  166151 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-714444" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.177708633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255911177680084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bca0a8c7-7764-4fff-b7a0-4d9846c44681 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.178390668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab0f26b6-3d67-486d-9822-084452455e1f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.178447516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab0f26b6-3d67-486d-9822-084452455e1f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.178766354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7538ce91ff80e79e0d3146097daf1d07f40d7acbdb2c691ba7cfcc4025f8b920,PodSandboxId:2fba0315f7ca360ef3e08f5be9905573769eaf201b40a87ebccc0700b83e67dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255907939868675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2lb5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe57d5b2-eb3b-4cd4-a674-557219e1f61b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d0e1b3ad123b783922d3fe1d745fd76d25c8527b2779c522d68e374503bcce,PodSandboxId:0fa5f3c458b88445381bfb7e441e6693eeb05e9d71825556a401c58e4d2aaf3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722255907897110495,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 7f9725de-fbe1-48bf-af53-3e9b74c8c8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018795e686120a2e04b433f0f94cdefefe96f7c4a1660663bc668f28d1f58ad3,PodSandboxId:974e6ebff4ae2e39c95ee4806e7d2917c47fd551b3610aeeb686f0df2d407341,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722255907930969812,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-62xt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: d936b416-5e70-4b27-90f8-18171b944aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bb859721a874e803c607072fb103423d00509eaeaad7fda964e749722a529f,PodSandboxId:fda7c1ee138404668f33bebb03d3b669818f6bd692a670b32996405da38c0ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255907944102073,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-x6qg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0991ad10-fc32-49c0-8ad1-e0
5700551746,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c89c994ae555ce66ac156d283b66b0c1b49c138f5db5e2184f641852784f679,PodSandboxId:759954e71196815d3db8d83cbb8b7e9d835f9b220d61a9602b690728679992f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722255903064552849,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc56a825d8d0add13b84326c9a731c07,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7d8d6a1024236403a0e04e04dbac9476f186c7f95ee55ad9bcda2ba93e33dc,PodSandboxId:a2c70d53ef8ad9ccefc21f2ffdd933dcd2a426bed1e935ef08ac8a318340161f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722255903039380515,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7c99c88a9b8aff5e80aa9d1513cab3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782145ac0869f9c6d5c8372038410eda6a33f15688ba1e23f36e180a5c93dd91,PodSandboxId:4029f474fed3728f0ae2aaa8ba59d51f0c36811c3a22df985420edb18a7c43c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722255903072184867,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8ef977efae57b3b8ff64c91ab26c65,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbd8be6da4e21b61fdf3945d633baf554c61306268ae20e3a2ae2e752c0d4d5,PodSandboxId:9e14da387467f04aed0c266f7184a874f25b72637481bfe24e9514b82c0eab3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722255903049314190,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4967a29c8370ca48097c2f6dfa5b28f,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6582604c13ef5a0b73afb9fb1d971b3cb6421dc0800ae1d55f1e511725d029c4,PodSandboxId:1e922bb389186d443813d48da68d2c5b5dfe1084b7a4c2b89af014e82658dafc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255898495673025,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2lb5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe57d5b2-eb3b-4cd4-a674-557219e1f61b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62754312f3ccd8ef3cedc64bc7f9324ca38c1017555c1e8591e1c7beae0f0f0d,PodSandboxId:dec324b9b737f0e5584add664d3b385cf867d2925ac4f8c4d6cce8181f6e2eaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255898285221877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-x6qg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0991ad10-fc32-49c0-8ad1-e05700551746,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b5520d895559d401e983d4db3ca0b22294029fa4c22aa76d423c57efb36a6d,PodSandboxId:7391b83617f309a2d831042dae470b2efc61e9eedd17d5f
6856f8b1470556245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722255897516054879,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8ef977efae57b3b8ff64c91ab26c65,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b12e30c4c718212372a59a82f1bfd6ae91068ec8e87393524374c0dabad227,PodSandboxId:762fb90ee412b549bbb44700ea706252999900b5288e3044bb3e0
814a46202dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722255897397430077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc56a825d8d0add13b84326c9a731c07,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4daa78fb1d843b530f4e5975e4649a0ec8b3f5e4d60e398395f72fe087f7f485,PodSandboxId:fc68e58d13c553464958b0eb4c0196b9e359e73f1f517c0ac2a466e70fd
64b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722255897465477861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7c99c88a9b8aff5e80aa9d1513cab3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07118482ae4fe89cd35ea974eada35547ac43713e48d831554e51950851f5a4d,PodSandboxId:0004292b462dd4707f4457f24bf0262479ebe44130184f7a3fe58e0f2ed984a4,Metadata:&ContainerMetadata{Nam
e:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722255897235227777,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4967a29c8370ca48097c2f6dfa5b28f,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0bae55eec9c03a97917c90edf41af6be4115955b76ff8d3f5e5c8e90a13ff,PodSandboxId:7f7d98bd902e2bf905dcec8909e0bc94e450c8e8c2bd7b00cad17cab2ee095eb,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722255897079091985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-62xt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d936b416-5e70-4b27-90f8-18171b944aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61aa7f77c05b255837118cb67836d7b7d7cf190de121709fc4f2a80601128401,PodSandboxId:b50732ccc81cc4d778f34fa4780aff43d1d98b2b2c35a22d48130c14a78fe050,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255882559588808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f9725de-fbe1-48bf-af53-3e9b74c8c8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab0f26b6-3d67-486d-9822-084452455e1f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.226663846Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d4ee3ea4-e05e-49da-bcdf-1d6115c46bfb name=/runtime.v1.RuntimeService/Version
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.226760471Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4ee3ea4-e05e-49da-bcdf-1d6115c46bfb name=/runtime.v1.RuntimeService/Version
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.228230690Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35da3f0c-3d6e-45b6-83fa-a298f869edfc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.228624897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255911228596983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35da3f0c-3d6e-45b6-83fa-a298f869edfc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.229820336Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55e36706-7e21-4645-b547-cb40c36d50f8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.229925675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55e36706-7e21-4645-b547-cb40c36d50f8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.230566118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7538ce91ff80e79e0d3146097daf1d07f40d7acbdb2c691ba7cfcc4025f8b920,PodSandboxId:2fba0315f7ca360ef3e08f5be9905573769eaf201b40a87ebccc0700b83e67dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255907939868675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2lb5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe57d5b2-eb3b-4cd4-a674-557219e1f61b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d0e1b3ad123b783922d3fe1d745fd76d25c8527b2779c522d68e374503bcce,PodSandboxId:0fa5f3c458b88445381bfb7e441e6693eeb05e9d71825556a401c58e4d2aaf3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722255907897110495,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 7f9725de-fbe1-48bf-af53-3e9b74c8c8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018795e686120a2e04b433f0f94cdefefe96f7c4a1660663bc668f28d1f58ad3,PodSandboxId:974e6ebff4ae2e39c95ee4806e7d2917c47fd551b3610aeeb686f0df2d407341,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722255907930969812,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-62xt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: d936b416-5e70-4b27-90f8-18171b944aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bb859721a874e803c607072fb103423d00509eaeaad7fda964e749722a529f,PodSandboxId:fda7c1ee138404668f33bebb03d3b669818f6bd692a670b32996405da38c0ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255907944102073,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-x6qg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0991ad10-fc32-49c0-8ad1-e0
5700551746,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c89c994ae555ce66ac156d283b66b0c1b49c138f5db5e2184f641852784f679,PodSandboxId:759954e71196815d3db8d83cbb8b7e9d835f9b220d61a9602b690728679992f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722255903064552849,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc56a825d8d0add13b84326c9a731c07,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7d8d6a1024236403a0e04e04dbac9476f186c7f95ee55ad9bcda2ba93e33dc,PodSandboxId:a2c70d53ef8ad9ccefc21f2ffdd933dcd2a426bed1e935ef08ac8a318340161f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722255903039380515,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7c99c88a9b8aff5e80aa9d1513cab3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782145ac0869f9c6d5c8372038410eda6a33f15688ba1e23f36e180a5c93dd91,PodSandboxId:4029f474fed3728f0ae2aaa8ba59d51f0c36811c3a22df985420edb18a7c43c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722255903072184867,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8ef977efae57b3b8ff64c91ab26c65,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbd8be6da4e21b61fdf3945d633baf554c61306268ae20e3a2ae2e752c0d4d5,PodSandboxId:9e14da387467f04aed0c266f7184a874f25b72637481bfe24e9514b82c0eab3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722255903049314190,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4967a29c8370ca48097c2f6dfa5b28f,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6582604c13ef5a0b73afb9fb1d971b3cb6421dc0800ae1d55f1e511725d029c4,PodSandboxId:1e922bb389186d443813d48da68d2c5b5dfe1084b7a4c2b89af014e82658dafc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255898495673025,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2lb5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe57d5b2-eb3b-4cd4-a674-557219e1f61b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62754312f3ccd8ef3cedc64bc7f9324ca38c1017555c1e8591e1c7beae0f0f0d,PodSandboxId:dec324b9b737f0e5584add664d3b385cf867d2925ac4f8c4d6cce8181f6e2eaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255898285221877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-x6qg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0991ad10-fc32-49c0-8ad1-e05700551746,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b5520d895559d401e983d4db3ca0b22294029fa4c22aa76d423c57efb36a6d,PodSandboxId:7391b83617f309a2d831042dae470b2efc61e9eedd17d5f
6856f8b1470556245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722255897516054879,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8ef977efae57b3b8ff64c91ab26c65,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b12e30c4c718212372a59a82f1bfd6ae91068ec8e87393524374c0dabad227,PodSandboxId:762fb90ee412b549bbb44700ea706252999900b5288e3044bb3e0
814a46202dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722255897397430077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc56a825d8d0add13b84326c9a731c07,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4daa78fb1d843b530f4e5975e4649a0ec8b3f5e4d60e398395f72fe087f7f485,PodSandboxId:fc68e58d13c553464958b0eb4c0196b9e359e73f1f517c0ac2a466e70fd
64b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722255897465477861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7c99c88a9b8aff5e80aa9d1513cab3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07118482ae4fe89cd35ea974eada35547ac43713e48d831554e51950851f5a4d,PodSandboxId:0004292b462dd4707f4457f24bf0262479ebe44130184f7a3fe58e0f2ed984a4,Metadata:&ContainerMetadata{Nam
e:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722255897235227777,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4967a29c8370ca48097c2f6dfa5b28f,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0bae55eec9c03a97917c90edf41af6be4115955b76ff8d3f5e5c8e90a13ff,PodSandboxId:7f7d98bd902e2bf905dcec8909e0bc94e450c8e8c2bd7b00cad17cab2ee095eb,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722255897079091985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-62xt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d936b416-5e70-4b27-90f8-18171b944aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61aa7f77c05b255837118cb67836d7b7d7cf190de121709fc4f2a80601128401,PodSandboxId:b50732ccc81cc4d778f34fa4780aff43d1d98b2b2c35a22d48130c14a78fe050,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255882559588808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f9725de-fbe1-48bf-af53-3e9b74c8c8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55e36706-7e21-4645-b547-cb40c36d50f8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.288292653Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6aad4c8f-5e67-4956-a8f0-4c5230c8305a name=/runtime.v1.RuntimeService/Version
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.288400636Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6aad4c8f-5e67-4956-a8f0-4c5230c8305a name=/runtime.v1.RuntimeService/Version
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.289656341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0128174c-315f-46f8-9bc6-5b22a70f4aa7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.290021744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255911289990367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0128174c-315f-46f8-9bc6-5b22a70f4aa7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.290776863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3c5a3e3-6259-46fd-9746-72153a2cc55e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.290872626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3c5a3e3-6259-46fd-9746-72153a2cc55e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.291565771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7538ce91ff80e79e0d3146097daf1d07f40d7acbdb2c691ba7cfcc4025f8b920,PodSandboxId:2fba0315f7ca360ef3e08f5be9905573769eaf201b40a87ebccc0700b83e67dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255907939868675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2lb5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe57d5b2-eb3b-4cd4-a674-557219e1f61b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d0e1b3ad123b783922d3fe1d745fd76d25c8527b2779c522d68e374503bcce,PodSandboxId:0fa5f3c458b88445381bfb7e441e6693eeb05e9d71825556a401c58e4d2aaf3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722255907897110495,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 7f9725de-fbe1-48bf-af53-3e9b74c8c8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018795e686120a2e04b433f0f94cdefefe96f7c4a1660663bc668f28d1f58ad3,PodSandboxId:974e6ebff4ae2e39c95ee4806e7d2917c47fd551b3610aeeb686f0df2d407341,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722255907930969812,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-62xt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: d936b416-5e70-4b27-90f8-18171b944aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bb859721a874e803c607072fb103423d00509eaeaad7fda964e749722a529f,PodSandboxId:fda7c1ee138404668f33bebb03d3b669818f6bd692a670b32996405da38c0ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255907944102073,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-x6qg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0991ad10-fc32-49c0-8ad1-e0
5700551746,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c89c994ae555ce66ac156d283b66b0c1b49c138f5db5e2184f641852784f679,PodSandboxId:759954e71196815d3db8d83cbb8b7e9d835f9b220d61a9602b690728679992f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722255903064552849,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc56a825d8d0add13b84326c9a731c07,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7d8d6a1024236403a0e04e04dbac9476f186c7f95ee55ad9bcda2ba93e33dc,PodSandboxId:a2c70d53ef8ad9ccefc21f2ffdd933dcd2a426bed1e935ef08ac8a318340161f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722255903039380515,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7c99c88a9b8aff5e80aa9d1513cab3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782145ac0869f9c6d5c8372038410eda6a33f15688ba1e23f36e180a5c93dd91,PodSandboxId:4029f474fed3728f0ae2aaa8ba59d51f0c36811c3a22df985420edb18a7c43c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722255903072184867,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8ef977efae57b3b8ff64c91ab26c65,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbd8be6da4e21b61fdf3945d633baf554c61306268ae20e3a2ae2e752c0d4d5,PodSandboxId:9e14da387467f04aed0c266f7184a874f25b72637481bfe24e9514b82c0eab3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722255903049314190,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4967a29c8370ca48097c2f6dfa5b28f,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6582604c13ef5a0b73afb9fb1d971b3cb6421dc0800ae1d55f1e511725d029c4,PodSandboxId:1e922bb389186d443813d48da68d2c5b5dfe1084b7a4c2b89af014e82658dafc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255898495673025,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2lb5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe57d5b2-eb3b-4cd4-a674-557219e1f61b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62754312f3ccd8ef3cedc64bc7f9324ca38c1017555c1e8591e1c7beae0f0f0d,PodSandboxId:dec324b9b737f0e5584add664d3b385cf867d2925ac4f8c4d6cce8181f6e2eaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255898285221877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-x6qg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0991ad10-fc32-49c0-8ad1-e05700551746,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b5520d895559d401e983d4db3ca0b22294029fa4c22aa76d423c57efb36a6d,PodSandboxId:7391b83617f309a2d831042dae470b2efc61e9eedd17d5f
6856f8b1470556245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722255897516054879,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8ef977efae57b3b8ff64c91ab26c65,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b12e30c4c718212372a59a82f1bfd6ae91068ec8e87393524374c0dabad227,PodSandboxId:762fb90ee412b549bbb44700ea706252999900b5288e3044bb3e0
814a46202dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722255897397430077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc56a825d8d0add13b84326c9a731c07,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4daa78fb1d843b530f4e5975e4649a0ec8b3f5e4d60e398395f72fe087f7f485,PodSandboxId:fc68e58d13c553464958b0eb4c0196b9e359e73f1f517c0ac2a466e70fd
64b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722255897465477861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7c99c88a9b8aff5e80aa9d1513cab3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07118482ae4fe89cd35ea974eada35547ac43713e48d831554e51950851f5a4d,PodSandboxId:0004292b462dd4707f4457f24bf0262479ebe44130184f7a3fe58e0f2ed984a4,Metadata:&ContainerMetadata{Nam
e:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722255897235227777,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4967a29c8370ca48097c2f6dfa5b28f,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0bae55eec9c03a97917c90edf41af6be4115955b76ff8d3f5e5c8e90a13ff,PodSandboxId:7f7d98bd902e2bf905dcec8909e0bc94e450c8e8c2bd7b00cad17cab2ee095eb,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722255897079091985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-62xt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d936b416-5e70-4b27-90f8-18171b944aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61aa7f77c05b255837118cb67836d7b7d7cf190de121709fc4f2a80601128401,PodSandboxId:b50732ccc81cc4d778f34fa4780aff43d1d98b2b2c35a22d48130c14a78fe050,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255882559588808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f9725de-fbe1-48bf-af53-3e9b74c8c8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3c5a3e3-6259-46fd-9746-72153a2cc55e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.326295932Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29dc327f-8c5e-4f06-86f7-ad8a258f9d3f name=/runtime.v1.RuntimeService/Version
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.326375004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29dc327f-8c5e-4f06-86f7-ad8a258f9d3f name=/runtime.v1.RuntimeService/Version
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.328318459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c631f6be-710a-4df8-909f-b3856396b6a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.328685962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255911328664627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c631f6be-710a-4df8-909f-b3856396b6a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.329174996Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1a97f80-488b-4521-8f62-408016017aa5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.329228176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1a97f80-488b-4521-8f62-408016017aa5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:25:11 kubernetes-upgrade-714444 crio[3012]: time="2024-07-29 12:25:11.329650766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7538ce91ff80e79e0d3146097daf1d07f40d7acbdb2c691ba7cfcc4025f8b920,PodSandboxId:2fba0315f7ca360ef3e08f5be9905573769eaf201b40a87ebccc0700b83e67dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255907939868675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2lb5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe57d5b2-eb3b-4cd4-a674-557219e1f61b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d0e1b3ad123b783922d3fe1d745fd76d25c8527b2779c522d68e374503bcce,PodSandboxId:0fa5f3c458b88445381bfb7e441e6693eeb05e9d71825556a401c58e4d2aaf3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722255907897110495,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 7f9725de-fbe1-48bf-af53-3e9b74c8c8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018795e686120a2e04b433f0f94cdefefe96f7c4a1660663bc668f28d1f58ad3,PodSandboxId:974e6ebff4ae2e39c95ee4806e7d2917c47fd551b3610aeeb686f0df2d407341,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722255907930969812,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-62xt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: d936b416-5e70-4b27-90f8-18171b944aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5bb859721a874e803c607072fb103423d00509eaeaad7fda964e749722a529f,PodSandboxId:fda7c1ee138404668f33bebb03d3b669818f6bd692a670b32996405da38c0ba9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255907944102073,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-x6qg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0991ad10-fc32-49c0-8ad1-e0
5700551746,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c89c994ae555ce66ac156d283b66b0c1b49c138f5db5e2184f641852784f679,PodSandboxId:759954e71196815d3db8d83cbb8b7e9d835f9b220d61a9602b690728679992f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722255903064552849,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc56a825d8d0add13b84326c9a731c07,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7d8d6a1024236403a0e04e04dbac9476f186c7f95ee55ad9bcda2ba93e33dc,PodSandboxId:a2c70d53ef8ad9ccefc21f2ffdd933dcd2a426bed1e935ef08ac8a318340161f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722255903039380515,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7c99c88a9b8aff5e80aa9d1513cab3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782145ac0869f9c6d5c8372038410eda6a33f15688ba1e23f36e180a5c93dd91,PodSandboxId:4029f474fed3728f0ae2aaa8ba59d51f0c36811c3a22df985420edb18a7c43c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722255903072184867,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8ef977efae57b3b8ff64c91ab26c65,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbd8be6da4e21b61fdf3945d633baf554c61306268ae20e3a2ae2e752c0d4d5,PodSandboxId:9e14da387467f04aed0c266f7184a874f25b72637481bfe24e9514b82c0eab3a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722255903049314190,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4967a29c8370ca48097c2f6dfa5b28f,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6582604c13ef5a0b73afb9fb1d971b3cb6421dc0800ae1d55f1e511725d029c4,PodSandboxId:1e922bb389186d443813d48da68d2c5b5dfe1084b7a4c2b89af014e82658dafc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255898495673025,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2lb5s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe57d5b2-eb3b-4cd4-a674-557219e1f61b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62754312f3ccd8ef3cedc64bc7f9324ca38c1017555c1e8591e1c7beae0f0f0d,PodSandboxId:dec324b9b737f0e5584add664d3b385cf867d2925ac4f8c4d6cce8181f6e2eaf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255898285221877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-x6qg5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0991ad10-fc32-49c0-8ad1-e05700551746,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76b5520d895559d401e983d4db3ca0b22294029fa4c22aa76d423c57efb36a6d,PodSandboxId:7391b83617f309a2d831042dae470b2efc61e9eedd17d5f
6856f8b1470556245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722255897516054879,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8ef977efae57b3b8ff64c91ab26c65,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2b12e30c4c718212372a59a82f1bfd6ae91068ec8e87393524374c0dabad227,PodSandboxId:762fb90ee412b549bbb44700ea706252999900b5288e3044bb3e0
814a46202dc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722255897397430077,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc56a825d8d0add13b84326c9a731c07,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4daa78fb1d843b530f4e5975e4649a0ec8b3f5e4d60e398395f72fe087f7f485,PodSandboxId:fc68e58d13c553464958b0eb4c0196b9e359e73f1f517c0ac2a466e70fd
64b76,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722255897465477861,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7c99c88a9b8aff5e80aa9d1513cab3,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07118482ae4fe89cd35ea974eada35547ac43713e48d831554e51950851f5a4d,PodSandboxId:0004292b462dd4707f4457f24bf0262479ebe44130184f7a3fe58e0f2ed984a4,Metadata:&ContainerMetadata{Nam
e:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722255897235227777,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-714444,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4967a29c8370ca48097c2f6dfa5b28f,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70f0bae55eec9c03a97917c90edf41af6be4115955b76ff8d3f5e5c8e90a13ff,PodSandboxId:7f7d98bd902e2bf905dcec8909e0bc94e450c8e8c2bd7b00cad17cab2ee095eb,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722255897079091985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-62xt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d936b416-5e70-4b27-90f8-18171b944aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61aa7f77c05b255837118cb67836d7b7d7cf190de121709fc4f2a80601128401,PodSandboxId:b50732ccc81cc4d778f34fa4780aff43d1d98b2b2c35a22d48130c14a78fe050,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722255882559588808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f9725de-fbe1-48bf-af53-3e9b74c8c8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1a97f80-488b-4521-8f62-408016017aa5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d5bb859721a87       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   fda7c1ee13840       coredns-5cfdc65f69-x6qg5
	7538ce91ff80e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   2fba0315f7ca3       coredns-5cfdc65f69-2lb5s
	018795e686120       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   3 seconds ago       Running             kube-proxy                2                   974e6ebff4ae2       kube-proxy-62xt2
	d5d0e1b3ad123       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   0fa5f3c458b88       storage-provisioner
	782145ac0869f       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   8 seconds ago       Running             kube-apiserver            2                   4029f474fed37       kube-apiserver-kubernetes-upgrade-714444
	4c89c994ae555       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   8 seconds ago       Running             kube-scheduler            2                   759954e711968       kube-scheduler-kubernetes-upgrade-714444
	7fbd8be6da4e2       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   8 seconds ago       Running             kube-controller-manager   2                   9e14da387467f       kube-controller-manager-kubernetes-upgrade-714444
	7e7d8d6a10242       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   8 seconds ago       Running             etcd                      2                   a2c70d53ef8ad       etcd-kubernetes-upgrade-714444
	6582604c13ef5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   12 seconds ago      Exited              coredns                   1                   1e922bb389186       coredns-5cfdc65f69-2lb5s
	62754312f3ccd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 seconds ago      Exited              coredns                   1                   dec324b9b737f       coredns-5cfdc65f69-x6qg5
	76b5520d89555       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   13 seconds ago      Exited              kube-apiserver            1                   7391b83617f30       kube-apiserver-kubernetes-upgrade-714444
	4daa78fb1d843       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   13 seconds ago      Exited              etcd                      1                   fc68e58d13c55       etcd-kubernetes-upgrade-714444
	b2b12e30c4c71       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   14 seconds ago      Exited              kube-scheduler            1                   762fb90ee412b       kube-scheduler-kubernetes-upgrade-714444
	07118482ae4fe       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   14 seconds ago      Exited              kube-controller-manager   1                   0004292b462dd       kube-controller-manager-kubernetes-upgrade-714444
	70f0bae55eec9       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   14 seconds ago      Exited              kube-proxy                1                   7f7d98bd902e2       kube-proxy-62xt2
	61aa7f77c05b2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   28 seconds ago      Exited              storage-provisioner       1                   b50732ccc81cc       storage-provisioner
	
	
	==> coredns [62754312f3ccd8ef3cedc64bc7f9324ca38c1017555c1e8591e1c7beae0f0f0d] <==
	
	
	==> coredns [6582604c13ef5a0b73afb9fb1d971b3cb6421dc0800ae1d55f1e511725d029c4] <==
	
	
	==> coredns [7538ce91ff80e79e0d3146097daf1d07f40d7acbdb2c691ba7cfcc4025f8b920] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d5bb859721a874e803c607072fb103423d00509eaeaad7fda964e749722a529f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-714444
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-714444
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:24:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-714444
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:25:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:25:06 +0000   Mon, 29 Jul 2024 12:24:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:25:06 +0000   Mon, 29 Jul 2024 12:24:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:25:06 +0000   Mon, 29 Jul 2024 12:24:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:25:06 +0000   Mon, 29 Jul 2024 12:24:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.36
	  Hostname:    kubernetes-upgrade-714444
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9371e57bc7b346259d75c4796dd71949
	  System UUID:                9371e57b-c7b3-4625-9d75-c4796dd71949
	  Boot ID:                    09cca347-9504-4d7c-a564-c05f2ac19ebb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-2lb5s                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 coredns-5cfdc65f69-x6qg5                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 etcd-kubernetes-upgrade-714444                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         36s
	  kube-system                 kube-apiserver-kubernetes-upgrade-714444             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-714444    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-62xt2                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-kubernetes-upgrade-714444             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s (x8 over 42s)  kubelet          Node kubernetes-upgrade-714444 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     41s (x7 over 42s)  kubelet          Node kubernetes-upgrade-714444 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    41s (x8 over 42s)  kubelet          Node kubernetes-upgrade-714444 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           30s                node-controller  Node kubernetes-upgrade-714444 event: Registered Node kubernetes-upgrade-714444 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-714444 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-714444 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-714444 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           0s                 node-controller  Node kubernetes-upgrade-714444 event: Registered Node kubernetes-upgrade-714444 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.551913] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.068234] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067017] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.179000] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.133906] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.283348] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +4.261956] systemd-fstab-generator[733]: Ignoring "noauto" option for root device
	[  +0.074082] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.186313] systemd-fstab-generator[853]: Ignoring "noauto" option for root device
	[  +7.837910] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +0.124703] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.007307] kauditd_printk_skb: 32 callbacks suppressed
	[ +14.539610] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.169765] systemd-fstab-generator[2330]: Ignoring "noauto" option for root device
	[  +0.377699] systemd-fstab-generator[2489]: Ignoring "noauto" option for root device
	[  +0.288832] systemd-fstab-generator[2607]: Ignoring "noauto" option for root device
	[  +0.331823] systemd-fstab-generator[2764]: Ignoring "noauto" option for root device
	[  +0.755815] systemd-fstab-generator[2947]: Ignoring "noauto" option for root device
	[Jul29 12:25] systemd-fstab-generator[3794]: Ignoring "noauto" option for root device
	[  +1.776954] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[  +0.094264] kauditd_printk_skb: 296 callbacks suppressed
	[  +5.673727] kauditd_printk_skb: 40 callbacks suppressed
	[  +1.162915] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	
	
	==> etcd [4daa78fb1d843b530f4e5975e4649a0ec8b3f5e4d60e398395f72fe087f7f485] <==
	{"level":"warn","ts":"2024-07-29T12:24:58.35791Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-29T12:24:58.358094Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.36:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.36:2380","--initial-cluster=kubernetes-upgrade-714444=https://192.168.50.36:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.36:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.36:2380","--name=kubernetes-upgrade-714444","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot
-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-07-29T12:24:58.358291Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-07-29T12:24:58.358336Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-29T12:24:58.358391Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.36:2380"]}
	{"level":"info","ts":"2024-07-29T12:24:58.358437Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T12:24:58.37052Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.36:2379"]}
	{"level":"info","ts":"2024-07-29T12:24:58.370902Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.14","git-sha":"bf51a53a7","go-version":"go1.21.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-714444","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.36:2380"],"listen-peer-urls":["https://192.168.50.36:2380"],"advertise-client-urls":["https://192.168.50.36:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.36:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","i
nitial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-07-29T12:24:58.442096Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"70.766328ms"}
	{"level":"info","ts":"2024-07-29T12:24:58.550892Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T12:24:58.628306Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"31bd1a1c1ff06930","local-member-id":"e5487579cc149d4d","commit-index":408}
	{"level":"info","ts":"2024-07-29T12:24:58.630424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T12:24:58.630555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d became follower at term 2"}
	{"level":"info","ts":"2024-07-29T12:24:58.631209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e5487579cc149d4d [peers: [], term: 2, commit: 408, applied: 0, lastindex: 408, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T12:24:58.690298Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T12:24:58.76645Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":398}
	
	
	==> etcd [7e7d8d6a1024236403a0e04e04dbac9476f186c7f95ee55ad9bcda2ba93e33dc] <==
	{"level":"info","ts":"2024-07-29T12:25:03.42557Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31bd1a1c1ff06930","local-member-id":"e5487579cc149d4d","added-peer-id":"e5487579cc149d4d","added-peer-peer-urls":["https://192.168.50.36:2380"]}
	{"level":"info","ts":"2024-07-29T12:25:03.42572Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31bd1a1c1ff06930","local-member-id":"e5487579cc149d4d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:25:03.425793Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:25:03.429641Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T12:25:03.431824Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T12:25:03.432441Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"e5487579cc149d4d","initial-advertise-peer-urls":["https://192.168.50.36:2380"],"listen-peer-urls":["https://192.168.50.36:2380"],"advertise-client-urls":["https://192.168.50.36:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.36:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T12:25:03.432488Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T12:25:03.43266Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.36:2380"}
	{"level":"info","ts":"2024-07-29T12:25:03.432694Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.36:2380"}
	{"level":"info","ts":"2024-07-29T12:25:05.196122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T12:25:05.196252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T12:25:05.196303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d received MsgPreVoteResp from e5487579cc149d4d at term 2"}
	{"level":"info","ts":"2024-07-29T12:25:05.196323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T12:25:05.196332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d received MsgVoteResp from e5487579cc149d4d at term 3"}
	{"level":"info","ts":"2024-07-29T12:25:05.196344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d became leader at term 3"}
	{"level":"info","ts":"2024-07-29T12:25:05.196355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5487579cc149d4d elected leader e5487579cc149d4d at term 3"}
	{"level":"info","ts":"2024-07-29T12:25:05.203004Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e5487579cc149d4d","local-member-attributes":"{Name:kubernetes-upgrade-714444 ClientURLs:[https://192.168.50.36:2379]}","request-path":"/0/members/e5487579cc149d4d/attributes","cluster-id":"31bd1a1c1ff06930","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T12:25:05.203005Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:25:05.203105Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:25:05.203553Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T12:25:05.20357Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T12:25:05.204063Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T12:25:05.204189Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T12:25:05.20502Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.36:2379"}
	{"level":"info","ts":"2024-07-29T12:25:05.205315Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:25:11 up 1 min,  0 users,  load average: 1.60, 0.42, 0.14
	Linux kubernetes-upgrade-714444 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [76b5520d895559d401e983d4db3ca0b22294029fa4c22aa76d423c57efb36a6d] <==
	I0729 12:24:58.320312       1 options.go:228] external host was not specified, using 192.168.50.36
	I0729 12:24:58.341209       1 server.go:142] Version: v1.31.0-beta.0
	I0729 12:24:58.341258       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [782145ac0869f9c6d5c8372038410eda6a33f15688ba1e23f36e180a5c93dd91] <==
	I0729 12:25:06.834164       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 12:25:06.834407       1 aggregator.go:171] initial CRD sync complete...
	I0729 12:25:06.834452       1 autoregister_controller.go:144] Starting autoregister controller
	I0729 12:25:06.834505       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 12:25:06.834537       1 cache.go:39] Caches are synced for autoregister controller
	I0729 12:25:06.864811       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 12:25:06.873802       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:25:06.873924       1 policy_source.go:224] refreshing policies
	I0729 12:25:06.886717       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0729 12:25:06.886816       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0729 12:25:06.887434       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 12:25:06.887933       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 12:25:06.889709       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 12:25:06.890922       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 12:25:06.899741       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0729 12:25:06.906984       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 12:25:06.922029       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 12:25:07.714653       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 12:25:08.281716       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 12:25:08.974986       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 12:25:08.993087       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 12:25:09.058974       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 12:25:09.112752       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 12:25:09.126180       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 12:25:11.078799       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [07118482ae4fe89cd35ea974eada35547ac43713e48d831554e51950851f5a4d] <==
	
	
	==> kube-controller-manager [7fbd8be6da4e21b61fdf3945d633baf554c61306268ae20e3a2ae2e752c0d4d5] <==
	I0729 12:25:10.986723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-714444"
	I0729 12:25:10.988114       1 shared_informer.go:320] Caches are synced for GC
	I0729 12:25:11.004831       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 12:25:11.006029       1 shared_informer.go:320] Caches are synced for service account
	I0729 12:25:11.008319       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 12:25:11.030832       1 shared_informer.go:320] Caches are synced for disruption
	I0729 12:25:11.055545       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 12:25:11.056464       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 12:25:11.056604       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-714444"
	I0729 12:25:11.082974       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 12:25:11.104712       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 12:25:11.105854       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 12:25:11.189638       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 12:25:11.216482       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:25:11.228753       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 12:25:11.231660       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:25:11.247848       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:25:11.247902       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 12:25:11.254805       1 shared_informer.go:320] Caches are synced for taint
	I0729 12:25:11.254952       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 12:25:11.255007       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-714444"
	I0729 12:25:11.255039       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 12:25:11.255126       1 shared_informer.go:320] Caches are synced for job
	I0729 12:25:11.256499       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 12:25:11.266801       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [018795e686120a2e04b433f0f94cdefefe96f7c4a1660663bc668f28d1f58ad3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 12:25:08.452320       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 12:25:08.469770       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.36"]
	E0729 12:25:08.469856       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 12:25:08.527602       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 12:25:08.527641       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:25:08.527699       1 server_linux.go:170] "Using iptables Proxier"
	I0729 12:25:08.530916       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 12:25:08.531523       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 12:25:08.531585       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:25:08.535028       1 config.go:197] "Starting service config controller"
	I0729 12:25:08.535246       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:25:08.535322       1 config.go:104] "Starting endpoint slice config controller"
	I0729 12:25:08.535354       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:25:08.536822       1 config.go:326] "Starting node config controller"
	I0729 12:25:08.536910       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:25:08.636241       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 12:25:08.636343       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:25:08.637189       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [70f0bae55eec9c03a97917c90edf41af6be4115955b76ff8d3f5e5c8e90a13ff] <==
	
	
	==> kube-scheduler [4c89c994ae555ce66ac156d283b66b0c1b49c138f5db5e2184f641852784f679] <==
	I0729 12:25:04.065933       1 serving.go:386] Generated self-signed cert in-memory
	W0729 12:25:06.803401       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 12:25:06.803520       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:25:06.803549       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 12:25:06.803604       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 12:25:06.837393       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 12:25:06.837481       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:25:06.839503       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 12:25:06.839557       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 12:25:06.839597       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 12:25:06.839686       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 12:25:06.940312       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b2b12e30c4c718212372a59a82f1bfd6ae91068ec8e87393524374c0dabad227] <==
	
	
	==> kubelet <==
	Jul 29 12:25:02 kubernetes-upgrade-714444 kubelet[3923]: E0729 12:25:02.906828    3923 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.36:8443: connect: connection refused" node="kubernetes-upgrade-714444"
	Jul 29 12:25:03 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:03.018559    3923 scope.go:117] "RemoveContainer" containerID="4daa78fb1d843b530f4e5975e4649a0ec8b3f5e4d60e398395f72fe087f7f485"
	Jul 29 12:25:03 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:03.026331    3923 scope.go:117] "RemoveContainer" containerID="07118482ae4fe89cd35ea974eada35547ac43713e48d831554e51950851f5a4d"
	Jul 29 12:25:03 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:03.027032    3923 scope.go:117] "RemoveContainer" containerID="b2b12e30c4c718212372a59a82f1bfd6ae91068ec8e87393524374c0dabad227"
	Jul 29 12:25:03 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:03.032966    3923 scope.go:117] "RemoveContainer" containerID="76b5520d895559d401e983d4db3ca0b22294029fa4c22aa76d423c57efb36a6d"
	Jul 29 12:25:03 kubernetes-upgrade-714444 kubelet[3923]: E0729 12:25:03.206548    3923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-714444?timeout=10s\": dial tcp 192.168.50.36:8443: connect: connection refused" interval="800ms"
	Jul 29 12:25:03 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:03.308854    3923 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-714444"
	Jul 29 12:25:03 kubernetes-upgrade-714444 kubelet[3923]: E0729 12:25:03.309776    3923 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.36:8443: connect: connection refused" node="kubernetes-upgrade-714444"
	Jul 29 12:25:04 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:04.111826    3923 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-714444"
	Jul 29 12:25:06 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:06.897678    3923 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-714444"
	Jul 29 12:25:06 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:06.898122    3923 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-714444"
	Jul 29 12:25:06 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:06.898241    3923 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 12:25:06 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:06.899630    3923 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 12:25:07 kubernetes-upgrade-714444 kubelet[3923]: E0729 12:25:07.240950    3923 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"kube-scheduler-kubernetes-upgrade-714444\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-714444"
	Jul 29 12:25:07 kubernetes-upgrade-714444 kubelet[3923]: E0729 12:25:07.388470    3923 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-714444\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-714444"
	Jul 29 12:25:07 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:07.576943    3923 apiserver.go:52] "Watching apiserver"
	Jul 29 12:25:07 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:07.602503    3923 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 29 12:25:07 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:07.628113    3923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7f9725de-fbe1-48bf-af53-3e9b74c8c8fe-tmp\") pod \"storage-provisioner\" (UID: \"7f9725de-fbe1-48bf-af53-3e9b74c8c8fe\") " pod="kube-system/storage-provisioner"
	Jul 29 12:25:07 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:07.629527    3923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d936b416-5e70-4b27-90f8-18171b944aa7-xtables-lock\") pod \"kube-proxy-62xt2\" (UID: \"d936b416-5e70-4b27-90f8-18171b944aa7\") " pod="kube-system/kube-proxy-62xt2"
	Jul 29 12:25:07 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:07.630237    3923 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d936b416-5e70-4b27-90f8-18171b944aa7-lib-modules\") pod \"kube-proxy-62xt2\" (UID: \"d936b416-5e70-4b27-90f8-18171b944aa7\") " pod="kube-system/kube-proxy-62xt2"
	Jul 29 12:25:07 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:07.883482    3923 scope.go:117] "RemoveContainer" containerID="61aa7f77c05b255837118cb67836d7b7d7cf190de121709fc4f2a80601128401"
	Jul 29 12:25:07 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:07.886190    3923 scope.go:117] "RemoveContainer" containerID="62754312f3ccd8ef3cedc64bc7f9324ca38c1017555c1e8591e1c7beae0f0f0d"
	Jul 29 12:25:07 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:07.891478    3923 scope.go:117] "RemoveContainer" containerID="6582604c13ef5a0b73afb9fb1d971b3cb6421dc0800ae1d55f1e511725d029c4"
	Jul 29 12:25:07 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:07.892327    3923 scope.go:117] "RemoveContainer" containerID="70f0bae55eec9c03a97917c90edf41af6be4115955b76ff8d3f5e5c8e90a13ff"
	Jul 29 12:25:09 kubernetes-upgrade-714444 kubelet[3923]: I0729 12:25:09.846969    3923 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [61aa7f77c05b255837118cb67836d7b7d7cf190de121709fc4f2a80601128401] <==
	I0729 12:24:42.752041       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 12:24:42.759019       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d5d0e1b3ad123b783922d3fe1d745fd76d25c8527b2779c522d68e374503bcce] <==
	I0729 12:25:08.227428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 12:25:08.264082       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 12:25:08.264178       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 12:25:08.297988       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"08312d7b-f34f-46b1-8328-1afe6d4dc228", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-714444_2d36817e-36cd-47b1-b8fa-85cd4816bc61 became leader
	I0729 12:25:08.299864       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 12:25:08.300182       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-714444_2d36817e-36cd-47b1-b8fa-85cd4816bc61!
	I0729 12:25:08.401341       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-714444_2d36817e-36cd-47b1-b8fa-85cd4816bc61!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-714444 -n kubernetes-upgrade-714444
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-714444 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-714444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-714444
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-714444: (1.107533631s)
--- FAIL: TestKubernetesUpgrade (379.08s)
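
Note: the post-mortem step at helpers_test.go:261 above shells out to kubectl with a field selector to list any pods that are not in the Running phase. For anyone reproducing that check by hand, a minimal standalone Go sketch of the same query is below; the use of os/exec and the hard-coded context name are illustrative assumptions, not the actual minikube helper code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirror the post-mortem query above: list pod names across all
		// namespaces whose phase is anything other than Running.
		out, err := exec.Command("kubectl",
			"--context", "kubernetes-upgrade-714444",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		names := strings.Fields(string(out))
		fmt.Printf("%d non-Running pod(s): %v\n", len(names), names)
	}

An empty list here indicates that every pod reported phase Running at the time the post-mortem was collected.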

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (55.49s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-737279 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-737279 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.878932847s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-737279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19336
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-737279" primary control-plane node in "pause-737279" cluster
	* Updating the running kvm2 "pause-737279" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-737279" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:23:38.947515  164458 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:23:38.947788  164458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:23:38.947825  164458 out.go:304] Setting ErrFile to fd 2...
	I0729 12:23:38.947840  164458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:23:38.948153  164458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 12:23:38.948940  164458 out.go:298] Setting JSON to false
	I0729 12:23:38.950456  164458 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7570,"bootTime":1722248249,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:23:38.950580  164458 start.go:139] virtualization: kvm guest
	I0729 12:23:38.952481  164458 out.go:177] * [pause-737279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:23:38.954118  164458 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 12:23:38.954255  164458 notify.go:220] Checking for updates...
	I0729 12:23:38.956749  164458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:23:38.958619  164458 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:23:38.961030  164458 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:23:38.962501  164458 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:23:38.963861  164458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:23:38.965821  164458 config.go:182] Loaded profile config "pause-737279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:23:38.966451  164458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:23:38.966507  164458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:23:38.987715  164458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0729 12:23:38.988453  164458 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:23:38.989319  164458 main.go:141] libmachine: Using API Version  1
	I0729 12:23:38.989353  164458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:23:38.989888  164458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:23:38.990093  164458 main.go:141] libmachine: (pause-737279) Calling .DriverName
	I0729 12:23:38.990383  164458 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:23:38.990832  164458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:23:38.990932  164458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:23:39.014988  164458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I0729 12:23:39.015654  164458 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:23:39.016359  164458 main.go:141] libmachine: Using API Version  1
	I0729 12:23:39.016375  164458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:23:39.016727  164458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:23:39.016947  164458 main.go:141] libmachine: (pause-737279) Calling .DriverName
	I0729 12:23:39.057643  164458 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 12:23:39.058985  164458 start.go:297] selected driver: kvm2
	I0729 12:23:39.059005  164458 start.go:901] validating driver "kvm2" against &{Name:pause-737279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-737279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:23:39.059242  164458 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:23:39.059734  164458 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:23:39.059840  164458 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:23:39.076753  164458 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:23:39.077656  164458 cni.go:84] Creating CNI manager for ""
	I0729 12:23:39.077674  164458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:23:39.077737  164458 start.go:340] cluster config:
	{Name:pause-737279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-737279 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:23:39.077932  164458 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:23:39.080387  164458 out.go:177] * Starting "pause-737279" primary control-plane node in "pause-737279" cluster
	I0729 12:23:39.081590  164458 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:23:39.081630  164458 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:23:39.081641  164458 cache.go:56] Caching tarball of preloaded images
	I0729 12:23:39.081715  164458 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:23:39.081726  164458 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:23:39.081841  164458 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/pause-737279/config.json ...
	I0729 12:23:39.082029  164458 start.go:360] acquireMachinesLock for pause-737279: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:23:52.081832  164458 start.go:364] duration metric: took 12.999775633s to acquireMachinesLock for "pause-737279"
	I0729 12:23:52.081899  164458 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:23:52.081911  164458 fix.go:54] fixHost starting: 
	I0729 12:23:52.082315  164458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:23:52.082361  164458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:23:52.099845  164458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0729 12:23:52.100307  164458 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:23:52.100791  164458 main.go:141] libmachine: Using API Version  1
	I0729 12:23:52.100815  164458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:23:52.101225  164458 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:23:52.101441  164458 main.go:141] libmachine: (pause-737279) Calling .DriverName
	I0729 12:23:52.101579  164458 main.go:141] libmachine: (pause-737279) Calling .GetState
	I0729 12:23:52.103225  164458 fix.go:112] recreateIfNeeded on pause-737279: state=Running err=<nil>
	W0729 12:23:52.103248  164458 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 12:23:52.105359  164458 out.go:177] * Updating the running kvm2 "pause-737279" VM ...
	I0729 12:23:52.106889  164458 machine.go:94] provisionDockerMachine start ...
	I0729 12:23:52.106918  164458 main.go:141] libmachine: (pause-737279) Calling .DriverName
	I0729 12:23:52.107174  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHHostname
	I0729 12:23:52.109906  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.110463  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:23:52.110504  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.110643  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHPort
	I0729 12:23:52.110843  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:23:52.110996  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:23:52.111140  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHUsername
	I0729 12:23:52.111343  164458 main.go:141] libmachine: Using SSH client type: native
	I0729 12:23:52.111625  164458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0729 12:23:52.111640  164458 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:23:52.214132  164458 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-737279
	
	I0729 12:23:52.214172  164458 main.go:141] libmachine: (pause-737279) Calling .GetMachineName
	I0729 12:23:52.214510  164458 buildroot.go:166] provisioning hostname "pause-737279"
	I0729 12:23:52.214545  164458 main.go:141] libmachine: (pause-737279) Calling .GetMachineName
	I0729 12:23:52.214794  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHHostname
	I0729 12:23:52.217414  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.217771  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:23:52.217796  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.217930  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHPort
	I0729 12:23:52.218149  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:23:52.218311  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:23:52.218444  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHUsername
	I0729 12:23:52.218616  164458 main.go:141] libmachine: Using SSH client type: native
	I0729 12:23:52.218829  164458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0729 12:23:52.218845  164458 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-737279 && echo "pause-737279" | sudo tee /etc/hostname
	I0729 12:23:52.334649  164458 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-737279
	
	I0729 12:23:52.334675  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHHostname
	I0729 12:23:52.337706  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.338070  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:23:52.338101  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.338268  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHPort
	I0729 12:23:52.338498  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:23:52.338682  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:23:52.338834  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHUsername
	I0729 12:23:52.338975  164458 main.go:141] libmachine: Using SSH client type: native
	I0729 12:23:52.339215  164458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0729 12:23:52.339233  164458 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-737279' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-737279/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-737279' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:23:52.441906  164458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:23:52.441950  164458 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 12:23:52.441997  164458 buildroot.go:174] setting up certificates
	I0729 12:23:52.442008  164458 provision.go:84] configureAuth start
	I0729 12:23:52.442023  164458 main.go:141] libmachine: (pause-737279) Calling .GetMachineName
	I0729 12:23:52.442320  164458 main.go:141] libmachine: (pause-737279) Calling .GetIP
	I0729 12:23:52.445615  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.446074  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:23:52.446096  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.446284  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHHostname
	I0729 12:23:52.448739  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.449196  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:23:52.449234  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.449428  164458 provision.go:143] copyHostCerts
	I0729 12:23:52.449489  164458 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 12:23:52.449502  164458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 12:23:52.449575  164458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 12:23:52.449689  164458 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 12:23:52.449708  164458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 12:23:52.449742  164458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 12:23:52.449881  164458 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 12:23:52.449895  164458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 12:23:52.449934  164458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 12:23:52.450037  164458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.pause-737279 san=[127.0.0.1 192.168.39.61 localhost minikube pause-737279]
	I0729 12:23:52.816449  164458 provision.go:177] copyRemoteCerts
	I0729 12:23:52.816548  164458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:23:52.816581  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHHostname
	I0729 12:23:52.819530  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.819926  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:23:52.819962  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.820127  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHPort
	I0729 12:23:52.820385  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:23:52.820592  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHUsername
	I0729 12:23:52.820774  164458 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/pause-737279/id_rsa Username:docker}
	I0729 12:23:52.907032  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0729 12:23:52.937590  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 12:23:52.964169  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 12:23:52.992693  164458 provision.go:87] duration metric: took 550.66693ms to configureAuth
	I0729 12:23:52.992726  164458 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:23:52.992984  164458 config.go:182] Loaded profile config "pause-737279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:23:52.993117  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHHostname
	I0729 12:23:52.996403  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.996762  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:23:52.996790  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:23:52.997066  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHPort
	I0729 12:23:52.997299  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:23:52.997516  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:23:52.997686  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHUsername
	I0729 12:23:52.997864  164458 main.go:141] libmachine: Using SSH client type: native
	I0729 12:23:52.998118  164458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0729 12:23:52.998140  164458 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:24:01.256367  164458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:24:01.256408  164458 machine.go:97] duration metric: took 9.149498956s to provisionDockerMachine
	I0729 12:24:01.256427  164458 start.go:293] postStartSetup for "pause-737279" (driver="kvm2")
	I0729 12:24:01.256443  164458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:24:01.256475  164458 main.go:141] libmachine: (pause-737279) Calling .DriverName
	I0729 12:24:01.256851  164458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:24:01.256882  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHHostname
	I0729 12:24:01.259469  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:01.259830  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:24:01.259862  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:01.260050  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHPort
	I0729 12:24:01.260265  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:24:01.260414  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHUsername
	I0729 12:24:01.260550  164458 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/pause-737279/id_rsa Username:docker}
	I0729 12:24:01.344022  164458 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:24:01.349530  164458 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:24:01.349567  164458 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 12:24:01.349651  164458 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 12:24:01.349720  164458 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 12:24:01.349831  164458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:24:01.360829  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:24:01.389119  164458 start.go:296] duration metric: took 132.665513ms for postStartSetup
	I0729 12:24:01.389178  164458 fix.go:56] duration metric: took 9.307266634s for fixHost
	I0729 12:24:01.389218  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHHostname
	I0729 12:24:01.391966  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:01.392233  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:24:01.392261  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:01.392444  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHPort
	I0729 12:24:01.392903  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:24:01.393158  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:24:01.393350  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHUsername
	I0729 12:24:01.393591  164458 main.go:141] libmachine: Using SSH client type: native
	I0729 12:24:01.393780  164458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.61 22 <nil> <nil>}
	I0729 12:24:01.393794  164458 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 12:24:01.493636  164458 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722255841.484185753
	
	I0729 12:24:01.493664  164458 fix.go:216] guest clock: 1722255841.484185753
	I0729 12:24:01.493675  164458 fix.go:229] Guest: 2024-07-29 12:24:01.484185753 +0000 UTC Remote: 2024-07-29 12:24:01.389183181 +0000 UTC m=+22.496466046 (delta=95.002572ms)
	I0729 12:24:01.493708  164458 fix.go:200] guest clock delta is within tolerance: 95.002572ms
	I0729 12:24:01.493719  164458 start.go:83] releasing machines lock for "pause-737279", held for 9.411844873s
	I0729 12:24:01.493749  164458 main.go:141] libmachine: (pause-737279) Calling .DriverName
	I0729 12:24:01.494066  164458 main.go:141] libmachine: (pause-737279) Calling .GetIP
	I0729 12:24:01.497384  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:01.497949  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:24:01.497980  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:01.498192  164458 main.go:141] libmachine: (pause-737279) Calling .DriverName
	I0729 12:24:01.498846  164458 main.go:141] libmachine: (pause-737279) Calling .DriverName
	I0729 12:24:01.499050  164458 main.go:141] libmachine: (pause-737279) Calling .DriverName
	I0729 12:24:01.499125  164458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:24:01.499178  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHHostname
	I0729 12:24:01.499316  164458 ssh_runner.go:195] Run: cat /version.json
	I0729 12:24:01.499342  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHHostname
	I0729 12:24:01.502137  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:01.502313  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:01.502532  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:24:01.502560  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:01.502731  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:24:01.502790  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:01.502810  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHPort
	I0729 12:24:01.503077  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:24:01.503088  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHPort
	I0729 12:24:01.503249  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHUsername
	I0729 12:24:01.503311  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHKeyPath
	I0729 12:24:01.503407  164458 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/pause-737279/id_rsa Username:docker}
	I0729 12:24:01.503475  164458 main.go:141] libmachine: (pause-737279) Calling .GetSSHUsername
	I0729 12:24:01.503578  164458 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/pause-737279/id_rsa Username:docker}
	I0729 12:24:01.582406  164458 ssh_runner.go:195] Run: systemctl --version
	I0729 12:24:01.606631  164458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:24:01.755842  164458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:24:01.762105  164458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:24:01.762189  164458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:24:01.771638  164458 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 12:24:01.771668  164458 start.go:495] detecting cgroup driver to use...
	I0729 12:24:01.771738  164458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:24:01.788189  164458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:24:01.803298  164458 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:24:01.803373  164458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:24:01.818324  164458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:24:01.833768  164458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:24:02.007839  164458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:24:02.202686  164458 docker.go:233] disabling docker service ...
	I0729 12:24:02.202781  164458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:24:02.296749  164458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:24:02.388202  164458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:24:02.711159  164458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:24:03.027325  164458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:24:03.080799  164458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:24:03.143517  164458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 12:24:03.143587  164458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:03.178195  164458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:24:03.178279  164458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:03.207937  164458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:03.246207  164458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:03.288603  164458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:24:03.303061  164458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:03.318206  164458 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:03.334324  164458 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:03.350375  164458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:24:03.363730  164458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:24:03.374705  164458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:24:03.606336  164458 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:24:04.176416  164458 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:24:04.176500  164458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:24:04.182336  164458 start.go:563] Will wait 60s for crictl version
	I0729 12:24:04.182413  164458 ssh_runner.go:195] Run: which crictl
	I0729 12:24:04.186680  164458 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:24:04.232125  164458 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:24:04.232226  164458 ssh_runner.go:195] Run: crio --version
	I0729 12:24:04.279813  164458 ssh_runner.go:195] Run: crio --version
	I0729 12:24:04.396623  164458 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 12:24:04.397978  164458 main.go:141] libmachine: (pause-737279) Calling .GetIP
	I0729 12:24:04.401256  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:04.401887  164458 main.go:141] libmachine: (pause-737279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:4c:20", ip: ""} in network mk-pause-737279: {Iface:virbr3 ExpiryTime:2024-07-29 13:22:44 +0000 UTC Type:0 Mac:52:54:00:8d:4c:20 Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:pause-737279 Clientid:01:52:54:00:8d:4c:20}
	I0729 12:24:04.401917  164458 main.go:141] libmachine: (pause-737279) DBG | domain pause-737279 has defined IP address 192.168.39.61 and MAC address 52:54:00:8d:4c:20 in network mk-pause-737279
	I0729 12:24:04.402220  164458 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 12:24:04.414596  164458 kubeadm.go:883] updating cluster {Name:pause-737279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-737279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:24:04.414796  164458 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:24:04.414862  164458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:24:04.639614  164458 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:24:04.639639  164458 crio.go:433] Images already preloaded, skipping extraction
	I0729 12:24:04.639692  164458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:24:04.786659  164458 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:24:04.786691  164458 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:24:04.786702  164458 kubeadm.go:934] updating node { 192.168.39.61 8443 v1.30.3 crio true true} ...
	I0729 12:24:04.786871  164458 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-737279 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-737279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:24:04.786970  164458 ssh_runner.go:195] Run: crio config
	I0729 12:24:04.938953  164458 cni.go:84] Creating CNI manager for ""
	I0729 12:24:04.938978  164458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:24:04.938996  164458 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:24:04.939025  164458 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.61 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-737279 NodeName:pause-737279 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:24:04.939211  164458 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-737279"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 12:24:04.939282  164458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 12:24:04.957666  164458 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:24:04.957840  164458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 12:24:04.969715  164458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 12:24:04.997433  164458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 12:24:05.027362  164458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 12:24:05.045289  164458 ssh_runner.go:195] Run: grep 192.168.39.61	control-plane.minikube.internal$ /etc/hosts
	I0729 12:24:05.049649  164458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:24:05.211852  164458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:24:05.229616  164458 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/pause-737279 for IP: 192.168.39.61
	I0729 12:24:05.229713  164458 certs.go:194] generating shared ca certs ...
	I0729 12:24:05.229750  164458 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:24:05.229987  164458 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 12:24:05.230060  164458 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 12:24:05.230095  164458 certs.go:256] generating profile certs ...
	I0729 12:24:05.230235  164458 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/pause-737279/client.key
	I0729 12:24:05.230317  164458 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/pause-737279/apiserver.key.f82af2a5
	I0729 12:24:05.230389  164458 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/pause-737279/proxy-client.key
	I0729 12:24:05.230574  164458 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 12:24:05.230627  164458 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 12:24:05.230647  164458 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 12:24:05.230711  164458 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 12:24:05.230769  164458 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:24:05.230828  164458 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 12:24:05.230919  164458 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:24:05.231766  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:24:05.261227  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 12:24:05.290908  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:24:05.320238  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:24:05.348405  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/pause-737279/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 12:24:05.376824  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/pause-737279/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 12:24:05.408664  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/pause-737279/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:24:05.472223  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/pause-737279/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 12:24:05.501695  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:24:05.533192  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 12:24:05.562073  164458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 12:24:05.592787  164458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:24:05.612371  164458 ssh_runner.go:195] Run: openssl version
	I0729 12:24:05.619576  164458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:24:05.633581  164458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:24:05.639061  164458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:24:05.639133  164458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:24:05.646313  164458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:24:05.659300  164458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 12:24:05.673392  164458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 12:24:05.678981  164458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 12:24:05.679051  164458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 12:24:05.685732  164458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
	I0729 12:24:05.698620  164458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 12:24:05.713232  164458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 12:24:05.718169  164458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 12:24:05.718292  164458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 12:24:05.725647  164458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:24:05.738501  164458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:24:05.744260  164458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:24:05.751736  164458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:24:05.761040  164458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:24:05.768557  164458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:24:05.776226  164458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:24:05.783863  164458 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 12:24:05.791506  164458 kubeadm.go:392] StartCluster: {Name:pause-737279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-737279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:24:05.791644  164458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:24:05.791724  164458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:24:05.840337  164458 cri.go:89] found id: "1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5"
	I0729 12:24:05.840359  164458 cri.go:89] found id: "67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff"
	I0729 12:24:05.840363  164458 cri.go:89] found id: "aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a"
	I0729 12:24:05.840366  164458 cri.go:89] found id: "4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6"
	I0729 12:24:05.840369  164458 cri.go:89] found id: "b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011"
	I0729 12:24:05.840372  164458 cri.go:89] found id: "0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7"
	I0729 12:24:05.840374  164458 cri.go:89] found id: ""
	I0729 12:24:05.840424  164458 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-737279 -n pause-737279
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-737279 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-737279 logs -n 25: (1.468811432s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-185676           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 12:19 UTC | 29 Jul 24 12:21 UTC |
	|         | --memory=2200 --vm-driver=kvm2      |                           |         |         |                     |                     |
	|         |  --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:20 UTC | 29 Jul 24 12:21 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p offline-crio-390530              | offline-crio-390530       | jenkins | v1.33.1 | 29 Jul 24 12:20 UTC | 29 Jul 24 12:20 UTC |
	| start   | -p running-upgrade-661564           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 12:20 UTC | 29 Jul 24 12:21 UTC |
	|         | --memory=2200 --vm-driver=kvm2      |                           |         |         |                     |                     |
	|         |  --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:21 UTC |
	| start   | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:21 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-185676 stop         | minikube                  | jenkins | v1.26.0 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:21 UTC |
	| start   | -p stopped-upgrade-185676           | stopped-upgrade-185676    | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:22 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p running-upgrade-661564           | running-upgrade-661564    | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:23 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-390849 sudo         | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC |                     |
	|         | systemctl is-active --quiet         |                           |         |         |                     |                     |
	|         | service kubelet                     |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:21 UTC |
	| start   | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:22 UTC |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-390849 sudo         | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:22 UTC |                     |
	|         | systemctl is-active --quiet         |                           |         |         |                     |                     |
	|         | service kubelet                     |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:22 UTC | 29 Jul 24 12:22 UTC |
	| start   | -p pause-737279 --memory=2048       | pause-737279              | jenkins | v1.33.1 | 29 Jul 24 12:22 UTC | 29 Jul 24 12:23 UTC |
	|         | --install-addons=false              |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2            |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-185676           | stopped-upgrade-185676    | jenkins | v1.33.1 | 29 Jul 24 12:22 UTC | 29 Jul 24 12:22 UTC |
	| start   | -p cert-expiration-524248           | cert-expiration-524248    | jenkins | v1.33.1 | 29 Jul 24 12:22 UTC | 29 Jul 24 12:23 UTC |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-661564           | running-upgrade-661564    | jenkins | v1.33.1 | 29 Jul 24 12:23 UTC | 29 Jul 24 12:23 UTC |
	| start   | -p force-systemd-flag-327451        | force-systemd-flag-327451 | jenkins | v1.33.1 | 29 Jul 24 12:23 UTC | 29 Jul 24 12:24 UTC |
	|         | --memory=2048 --force-systemd       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p pause-737279                     | pause-737279              | jenkins | v1.33.1 | 29 Jul 24 12:23 UTC | 29 Jul 24 12:24 UTC |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-714444        | kubernetes-upgrade-714444 | jenkins | v1.33.1 | 29 Jul 24 12:23 UTC | 29 Jul 24 12:23 UTC |
	| start   | -p kubernetes-upgrade-714444        | kubernetes-upgrade-714444 | jenkins | v1.33.1 | 29 Jul 24 12:23 UTC |                     |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-327451 ssh cat   | force-systemd-flag-327451 | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC | 29 Jul 24 12:24 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf  |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-327451        | force-systemd-flag-327451 | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC | 29 Jul 24 12:24 UTC |
	| start   | -p cert-options-882510              | cert-options-882510       | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1           |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15       |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost         |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com    |                           |         |         |                     |                     |
	|         | --apiserver-port=8555               |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:24:13
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:24:13.638835  165002 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:24:13.639093  165002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:24:13.639098  165002 out.go:304] Setting ErrFile to fd 2...
	I0729 12:24:13.639102  165002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:24:13.639319  165002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 12:24:13.639959  165002 out.go:298] Setting JSON to false
	I0729 12:24:13.641161  165002 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7605,"bootTime":1722248249,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:24:13.641227  165002 start.go:139] virtualization: kvm guest
	I0729 12:24:13.643646  165002 out.go:177] * [cert-options-882510] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:24:13.645241  165002 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 12:24:13.645327  165002 notify.go:220] Checking for updates...
	I0729 12:24:13.648070  165002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:24:13.649757  165002 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:24:13.651233  165002 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:24:13.652633  165002 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:24:13.654122  165002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:24:13.656063  165002 config.go:182] Loaded profile config "cert-expiration-524248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:24:13.656182  165002 config.go:182] Loaded profile config "kubernetes-upgrade-714444": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 12:24:13.656332  165002 config.go:182] Loaded profile config "pause-737279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:24:13.656417  165002 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:24:13.696716  165002 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 12:24:13.698144  165002 start.go:297] selected driver: kvm2
	I0729 12:24:13.698155  165002 start.go:901] validating driver "kvm2" against <nil>
	I0729 12:24:13.698167  165002 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:24:13.699104  165002 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:24:13.699208  165002 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:24:13.718245  165002 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:24:13.718311  165002 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:24:13.718517  165002 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 12:24:13.718556  165002 cni.go:84] Creating CNI manager for ""
	I0729 12:24:13.718562  165002 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:24:13.718566  165002 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 12:24:13.718616  165002 start.go:340] cluster config:
	{Name:cert-options-882510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-882510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:24:13.718721  165002 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:24:13.721581  165002 out.go:177] * Starting "cert-options-882510" primary control-plane node in "cert-options-882510" cluster
	I0729 12:24:13.102678  164458 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 12:24:13.112911  164458 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 12:24:13.135921  164458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 12:24:13.146147  164458 system_pods.go:59] 6 kube-system pods found
	I0729 12:24:13.146207  164458 system_pods.go:61] "coredns-7db6d8ff4d-dth8w" [9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 12:24:13.146219  164458 system_pods.go:61] "etcd-pause-737279" [a3e7c2fb-1721-4e04-8b6a-5d56c739d7c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 12:24:13.146228  164458 system_pods.go:61] "kube-apiserver-pause-737279" [002c0476-a619-48b5-9ccc-418b59526917] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 12:24:13.146239  164458 system_pods.go:61] "kube-controller-manager-pause-737279" [93857787-6d50-490c-a76a-4362bd3e64a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 12:24:13.146252  164458 system_pods.go:61] "kube-proxy-g67j8" [3b82113b-7e33-4acd-80a9-21b0a7b91d13] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 12:24:13.146264  164458 system_pods.go:61] "kube-scheduler-pause-737279" [e133ec5f-b9ac-4223-be31-8723de7bb5b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 12:24:13.146276  164458 system_pods.go:74] duration metric: took 10.328029ms to wait for pod list to return data ...
	I0729 12:24:13.146289  164458 node_conditions.go:102] verifying NodePressure condition ...
	I0729 12:24:13.150095  164458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 12:24:13.150126  164458 node_conditions.go:123] node cpu capacity is 2
	I0729 12:24:13.150141  164458 node_conditions.go:105] duration metric: took 3.845005ms to run NodePressure ...
	I0729 12:24:13.150161  164458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:24:13.438182  164458 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 12:24:13.443458  164458 kubeadm.go:739] kubelet initialised
	I0729 12:24:13.443490  164458 kubeadm.go:740] duration metric: took 5.279813ms waiting for restarted kubelet to initialise ...
	I0729 12:24:13.443501  164458 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:24:13.452492  164458 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:11.772760  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:11.773317  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:24:11.773346  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:24:11.773276  164770 retry.go:31] will retry after 1.851462555s: waiting for machine to come up
	I0729 12:24:13.626482  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:13.628010  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:24:13.628038  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:24:13.627918  164770 retry.go:31] will retry after 3.254945292s: waiting for machine to come up
	I0729 12:24:13.722936  165002 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:24:13.722978  165002 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:24:13.722992  165002 cache.go:56] Caching tarball of preloaded images
	I0729 12:24:13.723085  165002 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:24:13.723093  165002 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:24:13.723230  165002 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/cert-options-882510/config.json ...
	I0729 12:24:13.723250  165002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/cert-options-882510/config.json: {Name:mk12081fdc9868d6c05d921d5427c0fc9fb60530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:24:13.723424  165002 start.go:360] acquireMachinesLock for cert-options-882510: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:24:14.961735  164458 pod_ready.go:92] pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:14.961770  164458 pod_ready.go:81] duration metric: took 1.509237101s for pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:14.961784  164458 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:16.967888  164458 pod_ready.go:102] pod "etcd-pause-737279" in "kube-system" namespace has status "Ready":"False"
	I0729 12:24:16.884852  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:16.885323  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:24:16.885351  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:24:16.885283  164770 retry.go:31] will retry after 3.290405502s: waiting for machine to come up
	I0729 12:24:20.179885  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.180378  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has current primary IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.180392  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Found IP for machine: 192.168.50.36
	I0729 12:24:20.180408  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Reserving static IP address...
	I0729 12:24:20.180875  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-714444", mac: "52:54:00:92:96:14", ip: "192.168.50.36"} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.180899  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Reserved static IP address: 192.168.50.36
	I0729 12:24:20.180917  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | skip adding static IP to network mk-kubernetes-upgrade-714444 - found existing host DHCP lease matching {name: "kubernetes-upgrade-714444", mac: "52:54:00:92:96:14", ip: "192.168.50.36"}
	I0729 12:24:20.180930  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Getting to WaitForSSH function...
	I0729 12:24:20.180945  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Waiting for SSH to be available...
	I0729 12:24:20.183232  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.183600  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.183631  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.183808  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Using SSH client type: external
	I0729 12:24:20.183829  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa (-rw-------)
	I0729 12:24:20.183869  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 12:24:20.183886  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | About to run SSH command:
	I0729 12:24:20.183897  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | exit 0
	I0729 12:24:20.309238  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | SSH cmd err, output: <nil>: 
	I0729 12:24:20.309637  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetConfigRaw
	I0729 12:24:20.310354  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:24:20.313154  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.313540  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.313575  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.313766  164647 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/config.json ...
	I0729 12:24:20.313958  164647 machine.go:94] provisionDockerMachine start ...
	I0729 12:24:20.313979  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:20.314210  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.316835  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.317225  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.317269  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.317454  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:20.317640  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.317810  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.317973  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:20.318163  164647 main.go:141] libmachine: Using SSH client type: native
	I0729 12:24:20.318401  164647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:24:20.318415  164647 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:24:20.425370  164647 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 12:24:20.425404  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetMachineName
	I0729 12:24:20.425687  164647 buildroot.go:166] provisioning hostname "kubernetes-upgrade-714444"
	I0729 12:24:20.425723  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetMachineName
	I0729 12:24:20.425924  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.428847  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.429366  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.429407  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.429551  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:20.429761  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.430058  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.430233  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:20.430426  164647 main.go:141] libmachine: Using SSH client type: native
	I0729 12:24:20.430613  164647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:24:20.430625  164647 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-714444 && echo "kubernetes-upgrade-714444" | sudo tee /etc/hostname
	I0729 12:24:20.551539  164647 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-714444
	
	I0729 12:24:20.551571  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.554290  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.554600  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.554633  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.554810  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:20.555053  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.555253  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.555381  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:20.555558  164647 main.go:141] libmachine: Using SSH client type: native
	I0729 12:24:20.555782  164647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:24:20.555800  164647 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-714444' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-714444/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-714444' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:24:20.669863  164647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:24:20.669896  164647 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 12:24:20.669959  164647 buildroot.go:174] setting up certificates
	I0729 12:24:20.669974  164647 provision.go:84] configureAuth start
	I0729 12:24:20.669992  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetMachineName
	I0729 12:24:20.670306  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:24:20.672860  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.673316  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.673350  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.673491  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.675993  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.676317  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.676344  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.676524  164647 provision.go:143] copyHostCerts
	I0729 12:24:20.676592  164647 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 12:24:20.676606  164647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 12:24:20.676674  164647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 12:24:20.676816  164647 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 12:24:20.676826  164647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 12:24:20.676872  164647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 12:24:20.676986  164647 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 12:24:20.676997  164647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 12:24:20.677028  164647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 12:24:20.677113  164647 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-714444 san=[127.0.0.1 192.168.50.36 kubernetes-upgrade-714444 localhost minikube]
	I0729 12:24:20.797468  164647 provision.go:177] copyRemoteCerts
	I0729 12:24:20.797554  164647 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:24:20.797595  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.800697  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.801052  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.801084  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.801275  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:20.801461  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.801632  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:20.801766  164647 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:24:20.888339  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 12:24:20.912284  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:24:20.937103  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 12:24:20.962569  164647 provision.go:87] duration metric: took 292.57341ms to configureAuth
	I0729 12:24:20.962604  164647 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:24:20.962848  164647 config.go:182] Loaded profile config "kubernetes-upgrade-714444": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 12:24:20.962949  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.966023  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.966463  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.966499  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.966717  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:20.966958  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.967152  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.967318  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:20.967550  164647 main.go:141] libmachine: Using SSH client type: native
	I0729 12:24:20.967767  164647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:24:20.967791  164647 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:24:21.470172  165002 start.go:364] duration metric: took 7.746678226s to acquireMachinesLock for "cert-options-882510"
	I0729 12:24:21.470224  165002 start.go:93] Provisioning new machine with config: &{Name:cert-options-882510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-882510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:24:21.470332  165002 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 12:24:21.231962  164647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:24:21.231993  164647 machine.go:97] duration metric: took 918.021669ms to provisionDockerMachine
	I0729 12:24:21.232006  164647 start.go:293] postStartSetup for "kubernetes-upgrade-714444" (driver="kvm2")
	I0729 12:24:21.232030  164647 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:24:21.232062  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:21.232365  164647 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:24:21.232392  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:21.235036  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.235363  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:21.235395  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.235563  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:21.235764  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:21.235964  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:21.236153  164647 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:24:21.320619  164647 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:24:21.324846  164647 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:24:21.324886  164647 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 12:24:21.325000  164647 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 12:24:21.325088  164647 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 12:24:21.325217  164647 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:24:21.335282  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:24:21.360735  164647 start.go:296] duration metric: took 128.695915ms for postStartSetup
	I0729 12:24:21.360789  164647 fix.go:56] duration metric: took 19.86692126s for fixHost
	I0729 12:24:21.360819  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:21.363722  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.364165  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:21.364209  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.364319  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:21.364584  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:21.364776  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:21.364927  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:21.365239  164647 main.go:141] libmachine: Using SSH client type: native
	I0729 12:24:21.365420  164647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:24:21.365435  164647 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:24:21.469928  164647 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722255861.440086074
	
	I0729 12:24:21.469965  164647 fix.go:216] guest clock: 1722255861.440086074
	I0729 12:24:21.469976  164647 fix.go:229] Guest: 2024-07-29 12:24:21.440086074 +0000 UTC Remote: 2024-07-29 12:24:21.360794225 +0000 UTC m=+30.400862407 (delta=79.291849ms)
	I0729 12:24:21.470010  164647 fix.go:200] guest clock delta is within tolerance: 79.291849ms
	I0729 12:24:21.470033  164647 start.go:83] releasing machines lock for "kubernetes-upgrade-714444", held for 19.976194373s
	I0729 12:24:21.470085  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:21.470399  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:24:21.473356  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.473740  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:21.473773  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.473863  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:21.474478  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:21.474710  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:21.474802  164647 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:24:21.474843  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:21.474965  164647 ssh_runner.go:195] Run: cat /version.json
	I0729 12:24:21.474991  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:21.477561  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.477843  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.477990  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:21.478018  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.478185  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:21.478316  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:21.478347  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.478394  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:21.478505  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:21.478595  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:21.478649  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:21.478725  164647 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:24:21.478818  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:21.478952  164647 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:24:21.587800  164647 ssh_runner.go:195] Run: systemctl --version
	I0729 12:24:21.594581  164647 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:24:21.750970  164647 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:24:21.757026  164647 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:24:21.757104  164647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:24:21.773167  164647 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 12:24:21.773200  164647 start.go:495] detecting cgroup driver to use...
	I0729 12:24:21.773277  164647 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:24:21.790322  164647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:24:21.804998  164647 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:24:21.805076  164647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:24:21.819370  164647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:24:21.833788  164647 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:24:21.959664  164647 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:24:22.112585  164647 docker.go:233] disabling docker service ...
	I0729 12:24:22.112686  164647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:24:22.127012  164647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:24:22.140642  164647 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:24:22.287378  164647 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:24:22.397752  164647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:24:22.414110  164647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:24:22.433400  164647 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 12:24:22.433465  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.444261  164647 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:24:22.444334  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.455265  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.467714  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.478609  164647 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:24:22.488890  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.499387  164647 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.517026  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.527767  164647 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:24:22.537732  164647 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 12:24:22.537796  164647 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 12:24:22.551819  164647 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:24:22.562002  164647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:24:22.687562  164647 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:24:22.841785  164647 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:24:22.841852  164647 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:24:22.847926  164647 start.go:563] Will wait 60s for crictl version
	I0729 12:24:22.848007  164647 ssh_runner.go:195] Run: which crictl
	I0729 12:24:22.852818  164647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:24:22.894513  164647 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:24:22.894626  164647 ssh_runner.go:195] Run: crio --version
	I0729 12:24:22.927644  164647 ssh_runner.go:195] Run: crio --version
	I0729 12:24:22.960302  164647 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 12:24:21.472280  165002 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 12:24:21.472494  165002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:24:21.472526  165002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:24:21.493620  165002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35009
	I0729 12:24:21.494070  165002 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:24:21.494693  165002 main.go:141] libmachine: Using API Version  1
	I0729 12:24:21.494710  165002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:24:21.495109  165002 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:24:21.495358  165002 main.go:141] libmachine: (cert-options-882510) Calling .GetMachineName
	I0729 12:24:21.495517  165002 main.go:141] libmachine: (cert-options-882510) Calling .DriverName
	I0729 12:24:21.495674  165002 start.go:159] libmachine.API.Create for "cert-options-882510" (driver="kvm2")
	I0729 12:24:21.495708  165002 client.go:168] LocalClient.Create starting
	I0729 12:24:21.495742  165002 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem
	I0729 12:24:21.495780  165002 main.go:141] libmachine: Decoding PEM data...
	I0729 12:24:21.495796  165002 main.go:141] libmachine: Parsing certificate...
	I0729 12:24:21.495864  165002 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem
	I0729 12:24:21.495882  165002 main.go:141] libmachine: Decoding PEM data...
	I0729 12:24:21.495892  165002 main.go:141] libmachine: Parsing certificate...
	I0729 12:24:21.495910  165002 main.go:141] libmachine: Running pre-create checks...
	I0729 12:24:21.495919  165002 main.go:141] libmachine: (cert-options-882510) Calling .PreCreateCheck
	I0729 12:24:21.496292  165002 main.go:141] libmachine: (cert-options-882510) Calling .GetConfigRaw
	I0729 12:24:21.496793  165002 main.go:141] libmachine: Creating machine...
	I0729 12:24:21.496804  165002 main.go:141] libmachine: (cert-options-882510) Calling .Create
	I0729 12:24:21.497044  165002 main.go:141] libmachine: (cert-options-882510) Creating KVM machine...
	I0729 12:24:21.498474  165002 main.go:141] libmachine: (cert-options-882510) DBG | found existing default KVM network
	I0729 12:24:21.500062  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.499887  165064 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e1:cd:0c} reservation:<nil>}
	I0729 12:24:21.501029  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.500897  165064 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6d:9b:ef} reservation:<nil>}
	I0729 12:24:21.502045  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.501957  165064 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:91:ef:de} reservation:<nil>}
	I0729 12:24:21.503189  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.503089  165064 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289d50}
	I0729 12:24:21.503224  165002 main.go:141] libmachine: (cert-options-882510) DBG | created network xml: 
	I0729 12:24:21.503234  165002 main.go:141] libmachine: (cert-options-882510) DBG | <network>
	I0729 12:24:21.503243  165002 main.go:141] libmachine: (cert-options-882510) DBG |   <name>mk-cert-options-882510</name>
	I0729 12:24:21.503250  165002 main.go:141] libmachine: (cert-options-882510) DBG |   <dns enable='no'/>
	I0729 12:24:21.503257  165002 main.go:141] libmachine: (cert-options-882510) DBG |   
	I0729 12:24:21.503265  165002 main.go:141] libmachine: (cert-options-882510) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0729 12:24:21.503295  165002 main.go:141] libmachine: (cert-options-882510) DBG |     <dhcp>
	I0729 12:24:21.503302  165002 main.go:141] libmachine: (cert-options-882510) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0729 12:24:21.503325  165002 main.go:141] libmachine: (cert-options-882510) DBG |     </dhcp>
	I0729 12:24:21.503335  165002 main.go:141] libmachine: (cert-options-882510) DBG |   </ip>
	I0729 12:24:21.503344  165002 main.go:141] libmachine: (cert-options-882510) DBG |   
	I0729 12:24:21.503350  165002 main.go:141] libmachine: (cert-options-882510) DBG | </network>
	I0729 12:24:21.503360  165002 main.go:141] libmachine: (cert-options-882510) DBG | 
	I0729 12:24:21.509371  165002 main.go:141] libmachine: (cert-options-882510) DBG | trying to create private KVM network mk-cert-options-882510 192.168.72.0/24...
	I0729 12:24:21.587889  165002 main.go:141] libmachine: (cert-options-882510) DBG | private KVM network mk-cert-options-882510 192.168.72.0/24 created
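
The kvm2 driver has just picked the free 192.168.72.0/24 subnet and created the private libvirt network from the XML printed above. As a rough illustration of the same idea outside minikube, the Go sketch below writes an equivalent network definition to a temp file and defines it with plain virsh; it assumes libvirt and virsh are installed, and it is not the driver's actual code path (running it on a host where the driver already created this network would simply conflict):

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// networkXML mirrors the network definition printed in the log above.
	const networkXML = `<network>
	  <name>mk-cert-options-882510</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>`
	
	func main() {
		f, err := os.CreateTemp("", "mk-net-*.xml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(networkXML); err != nil {
			panic(err)
		}
		f.Close()
	
		// Define and start the network with virsh (requires libvirt privileges).
		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", "mk-cert-options-882510"},
		} {
			out, err := exec.Command("virsh", args...).CombinedOutput()
			fmt.Print(string(out))
			if err != nil {
				panic(err)
			}
		}
	}
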
	I0729 12:24:21.587917  165002 main.go:141] libmachine: (cert-options-882510) Setting up store path in /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510 ...
	I0729 12:24:21.587938  165002 main.go:141] libmachine: (cert-options-882510) Building disk image from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 12:24:21.588023  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.587943  165064 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:24:21.588316  165002 main.go:141] libmachine: (cert-options-882510) Downloading /home/jenkins/minikube-integration/19336-113730/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 12:24:21.846386  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.846234  165064 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510/id_rsa...
	I0729 12:24:22.058220  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:22.058090  165064 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510/cert-options-882510.rawdisk...
	I0729 12:24:22.058234  165002 main.go:141] libmachine: (cert-options-882510) DBG | Writing magic tar header
	I0729 12:24:22.058259  165002 main.go:141] libmachine: (cert-options-882510) DBG | Writing SSH key tar header
	I0729 12:24:22.058271  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:22.058257  165064 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510 ...
	I0729 12:24:22.058371  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510
	I0729 12:24:22.058382  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines
	I0729 12:24:22.058421  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:24:22.058447  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730
	I0729 12:24:22.058456  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510 (perms=drwx------)
	I0729 12:24:22.058468  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines (perms=drwxr-xr-x)
	I0729 12:24:22.058477  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube (perms=drwxr-xr-x)
	I0729 12:24:22.058484  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730 (perms=drwxrwxr-x)
	I0729 12:24:22.058489  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 12:24:22.058495  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 12:24:22.058499  165002 main.go:141] libmachine: (cert-options-882510) Creating domain...
	I0729 12:24:22.058559  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 12:24:22.058574  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins
	I0729 12:24:22.058580  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home
	I0729 12:24:22.058584  165002 main.go:141] libmachine: (cert-options-882510) DBG | Skipping /home - not owner
	I0729 12:24:22.059761  165002 main.go:141] libmachine: (cert-options-882510) define libvirt domain using xml: 
	I0729 12:24:22.059772  165002 main.go:141] libmachine: (cert-options-882510) <domain type='kvm'>
	I0729 12:24:22.059778  165002 main.go:141] libmachine: (cert-options-882510)   <name>cert-options-882510</name>
	I0729 12:24:22.059782  165002 main.go:141] libmachine: (cert-options-882510)   <memory unit='MiB'>2048</memory>
	I0729 12:24:22.059786  165002 main.go:141] libmachine: (cert-options-882510)   <vcpu>2</vcpu>
	I0729 12:24:22.059791  165002 main.go:141] libmachine: (cert-options-882510)   <features>
	I0729 12:24:22.059796  165002 main.go:141] libmachine: (cert-options-882510)     <acpi/>
	I0729 12:24:22.059799  165002 main.go:141] libmachine: (cert-options-882510)     <apic/>
	I0729 12:24:22.059803  165002 main.go:141] libmachine: (cert-options-882510)     <pae/>
	I0729 12:24:22.059808  165002 main.go:141] libmachine: (cert-options-882510)     
	I0729 12:24:22.059814  165002 main.go:141] libmachine: (cert-options-882510)   </features>
	I0729 12:24:22.059820  165002 main.go:141] libmachine: (cert-options-882510)   <cpu mode='host-passthrough'>
	I0729 12:24:22.059826  165002 main.go:141] libmachine: (cert-options-882510)   
	I0729 12:24:22.059831  165002 main.go:141] libmachine: (cert-options-882510)   </cpu>
	I0729 12:24:22.059837  165002 main.go:141] libmachine: (cert-options-882510)   <os>
	I0729 12:24:22.059843  165002 main.go:141] libmachine: (cert-options-882510)     <type>hvm</type>
	I0729 12:24:22.059880  165002 main.go:141] libmachine: (cert-options-882510)     <boot dev='cdrom'/>
	I0729 12:24:22.059891  165002 main.go:141] libmachine: (cert-options-882510)     <boot dev='hd'/>
	I0729 12:24:22.059898  165002 main.go:141] libmachine: (cert-options-882510)     <bootmenu enable='no'/>
	I0729 12:24:22.059902  165002 main.go:141] libmachine: (cert-options-882510)   </os>
	I0729 12:24:22.059906  165002 main.go:141] libmachine: (cert-options-882510)   <devices>
	I0729 12:24:22.059915  165002 main.go:141] libmachine: (cert-options-882510)     <disk type='file' device='cdrom'>
	I0729 12:24:22.059927  165002 main.go:141] libmachine: (cert-options-882510)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510/boot2docker.iso'/>
	I0729 12:24:22.059943  165002 main.go:141] libmachine: (cert-options-882510)       <target dev='hdc' bus='scsi'/>
	I0729 12:24:22.059950  165002 main.go:141] libmachine: (cert-options-882510)       <readonly/>
	I0729 12:24:22.059957  165002 main.go:141] libmachine: (cert-options-882510)     </disk>
	I0729 12:24:22.059965  165002 main.go:141] libmachine: (cert-options-882510)     <disk type='file' device='disk'>
	I0729 12:24:22.059974  165002 main.go:141] libmachine: (cert-options-882510)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 12:24:22.059990  165002 main.go:141] libmachine: (cert-options-882510)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510/cert-options-882510.rawdisk'/>
	I0729 12:24:22.059995  165002 main.go:141] libmachine: (cert-options-882510)       <target dev='hda' bus='virtio'/>
	I0729 12:24:22.060002  165002 main.go:141] libmachine: (cert-options-882510)     </disk>
	I0729 12:24:22.060008  165002 main.go:141] libmachine: (cert-options-882510)     <interface type='network'>
	I0729 12:24:22.060016  165002 main.go:141] libmachine: (cert-options-882510)       <source network='mk-cert-options-882510'/>
	I0729 12:24:22.060022  165002 main.go:141] libmachine: (cert-options-882510)       <model type='virtio'/>
	I0729 12:24:22.060029  165002 main.go:141] libmachine: (cert-options-882510)     </interface>
	I0729 12:24:22.060034  165002 main.go:141] libmachine: (cert-options-882510)     <interface type='network'>
	I0729 12:24:22.060042  165002 main.go:141] libmachine: (cert-options-882510)       <source network='default'/>
	I0729 12:24:22.060048  165002 main.go:141] libmachine: (cert-options-882510)       <model type='virtio'/>
	I0729 12:24:22.060057  165002 main.go:141] libmachine: (cert-options-882510)     </interface>
	I0729 12:24:22.060063  165002 main.go:141] libmachine: (cert-options-882510)     <serial type='pty'>
	I0729 12:24:22.060070  165002 main.go:141] libmachine: (cert-options-882510)       <target port='0'/>
	I0729 12:24:22.060075  165002 main.go:141] libmachine: (cert-options-882510)     </serial>
	I0729 12:24:22.060083  165002 main.go:141] libmachine: (cert-options-882510)     <console type='pty'>
	I0729 12:24:22.060089  165002 main.go:141] libmachine: (cert-options-882510)       <target type='serial' port='0'/>
	I0729 12:24:22.060096  165002 main.go:141] libmachine: (cert-options-882510)     </console>
	I0729 12:24:22.060102  165002 main.go:141] libmachine: (cert-options-882510)     <rng model='virtio'>
	I0729 12:24:22.060111  165002 main.go:141] libmachine: (cert-options-882510)       <backend model='random'>/dev/random</backend>
	I0729 12:24:22.060122  165002 main.go:141] libmachine: (cert-options-882510)     </rng>
	I0729 12:24:22.060129  165002 main.go:141] libmachine: (cert-options-882510)     
	I0729 12:24:22.060134  165002 main.go:141] libmachine: (cert-options-882510)     
	I0729 12:24:22.060139  165002 main.go:141] libmachine: (cert-options-882510)   </devices>
	I0729 12:24:22.060144  165002 main.go:141] libmachine: (cert-options-882510) </domain>
	I0729 12:24:22.060154  165002 main.go:141] libmachine: (cert-options-882510) 
	I0729 12:24:22.064449  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:e9:9f:fe in network default
	I0729 12:24:22.065091  165002 main.go:141] libmachine: (cert-options-882510) Ensuring networks are active...
	I0729 12:24:22.065105  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:22.065747  165002 main.go:141] libmachine: (cert-options-882510) Ensuring network default is active
	I0729 12:24:22.066167  165002 main.go:141] libmachine: (cert-options-882510) Ensuring network mk-cert-options-882510 is active
	I0729 12:24:22.066708  165002 main.go:141] libmachine: (cert-options-882510) Getting domain xml...
	I0729 12:24:22.067414  165002 main.go:141] libmachine: (cert-options-882510) Creating domain...
	I0729 12:24:23.454966  165002 main.go:141] libmachine: (cert-options-882510) Waiting to get IP...
	I0729 12:24:23.455917  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:23.456622  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:23.456778  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:23.456682  165064 retry.go:31] will retry after 203.823915ms: waiting for machine to come up
	I0729 12:24:18.968829  164458 pod_ready.go:102] pod "etcd-pause-737279" in "kube-system" namespace has status "Ready":"False"
	I0729 12:24:20.970100  164458 pod_ready.go:102] pod "etcd-pause-737279" in "kube-system" namespace has status "Ready":"False"
	I0729 12:24:22.969320  164458 pod_ready.go:92] pod "etcd-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:22.969353  164458 pod_ready.go:81] duration metric: took 8.007561066s for pod "etcd-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:22.969368  164458 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:22.981401  164458 pod_ready.go:92] pod "kube-apiserver-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:22.981441  164458 pod_ready.go:81] duration metric: took 12.063901ms for pod "kube-apiserver-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:22.981459  164458 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:22.961650  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:24:22.965131  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:22.965706  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:22.965731  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:22.965985  164647 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 12:24:22.970785  164647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:24:22.985311  164647 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-714444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-714444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:24:22.985447  164647 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 12:24:22.985511  164647 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:24:23.035105  164647 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 12:24:23.035198  164647 ssh_runner.go:195] Run: which lz4
	I0729 12:24:23.039553  164647 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 12:24:23.043683  164647 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 12:24:23.043735  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0729 12:24:24.377689  164647 crio.go:462] duration metric: took 1.33817998s to copy over tarball
	I0729 12:24:24.377853  164647 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
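
Before copying the preload tarball, the log above runs `sudo crictl images --output json` and concludes that the expected kube-apiserver image is not preloaded. A small Go sketch of that kind of check, shelling out to crictl and scanning repo tags; the JSON field names (`images`, `repoTags`) follow the CRI ListImagesResponse as crictl prints it and should be treated as an assumption rather than a guaranteed contract:

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// imageList is a minimal shape of `crictl images --output json`,
	// keeping only the fields this check needs.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	
	func main() {
		want := "registry.k8s.io/kube-apiserver:v1.31.0-beta.0"
	
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("image already present, preload not needed")
					return
				}
			}
		}
		fmt.Println("image missing, would copy and extract the preload tarball")
	}
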
	I0729 12:24:24.989663  164458 pod_ready.go:102] pod "kube-controller-manager-pause-737279" in "kube-system" namespace has status "Ready":"False"
	I0729 12:24:26.488527  164458 pod_ready.go:92] pod "kube-controller-manager-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:26.488553  164458 pod_ready.go:81] duration metric: took 3.507084365s for pod "kube-controller-manager-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.488567  164458 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g67j8" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.494902  164458 pod_ready.go:92] pod "kube-proxy-g67j8" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:26.494929  164458 pod_ready.go:81] duration metric: took 6.353354ms for pod "kube-proxy-g67j8" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.494942  164458 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.500951  164458 pod_ready.go:92] pod "kube-scheduler-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:26.501003  164458 pod_ready.go:81] duration metric: took 6.051504ms for pod "kube-scheduler-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.501014  164458 pod_ready.go:38] duration metric: took 13.057500758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
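
The readiness waits above poll each system pod until its PodReady condition turns True. A compact client-go sketch of the same check; the kubeconfig path is a placeholder, the pod name is taken from the log, and this is an illustration rather than minikube's pod_ready helper:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-737279", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // poll until Ready (the log bounds this with a 4m/6m timeout)
		}
	}
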
	I0729 12:24:26.501043  164458 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 12:24:26.512857  164458 ops.go:34] apiserver oom_adj: -16
	I0729 12:24:26.512885  164458 kubeadm.go:597] duration metric: took 20.606246834s to restartPrimaryControlPlane
	I0729 12:24:26.512897  164458 kubeadm.go:394] duration metric: took 20.72140841s to StartCluster
	I0729 12:24:26.512922  164458 settings.go:142] acquiring lock: {Name:mkb2a487c2f52476061a6d736b8e75563062eb9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:24:26.513060  164458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:24:26.514350  164458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/kubeconfig: {Name:mkb219e196dca6dd8aa7af14918c6562be58786a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:24:26.514652  164458 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:24:26.514827  164458 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 12:24:26.515012  164458 config.go:182] Loaded profile config "pause-737279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:24:26.516901  164458 out.go:177] * Verifying Kubernetes components...
	I0729 12:24:26.516900  164458 out.go:177] * Enabled addons: 
	I0729 12:24:23.662262  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:23.662999  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:23.663024  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:23.662941  165064 retry.go:31] will retry after 286.478779ms: waiting for machine to come up
	I0729 12:24:23.951741  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:23.952358  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:23.952381  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:23.952331  165064 retry.go:31] will retry after 398.431963ms: waiting for machine to come up
	I0729 12:24:24.352062  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:24.352612  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:24.352624  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:24.352575  165064 retry.go:31] will retry after 405.37366ms: waiting for machine to come up
	I0729 12:24:24.759522  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:24.760205  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:24.760227  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:24.760155  165064 retry.go:31] will retry after 659.066485ms: waiting for machine to come up
	I0729 12:24:25.421249  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:25.421899  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:25.421920  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:25.421847  165064 retry.go:31] will retry after 910.229267ms: waiting for machine to come up
	I0729 12:24:26.334126  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:26.334757  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:26.334779  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:26.334701  165064 retry.go:31] will retry after 980.588569ms: waiting for machine to come up
	I0729 12:24:27.317198  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:27.317757  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:27.317776  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:27.317677  165064 retry.go:31] will retry after 1.267879012s: waiting for machine to come up
	I0729 12:24:28.587565  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:28.588148  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:28.588170  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:28.588093  165064 retry.go:31] will retry after 1.797079781s: waiting for machine to come up
	I0729 12:24:26.518097  164458 addons.go:510] duration metric: took 3.278237ms for enable addons: enabled=[]
	I0729 12:24:26.518151  164458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:24:26.676055  164458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:24:26.697882  164458 node_ready.go:35] waiting up to 6m0s for node "pause-737279" to be "Ready" ...
	I0729 12:24:26.701180  164458 node_ready.go:49] node "pause-737279" has status "Ready":"True"
	I0729 12:24:26.701205  164458 node_ready.go:38] duration metric: took 3.290812ms for node "pause-737279" to be "Ready" ...
	I0729 12:24:26.701217  164458 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:24:26.706947  164458 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.713091  164458 pod_ready.go:92] pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:26.713116  164458 pod_ready.go:81] duration metric: took 6.137538ms for pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.713126  164458 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.966464  164458 pod_ready.go:92] pod "etcd-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:26.966496  164458 pod_ready.go:81] duration metric: took 253.362507ms for pod "etcd-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.966512  164458 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:27.367022  164458 pod_ready.go:92] pod "kube-apiserver-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:27.367048  164458 pod_ready.go:81] duration metric: took 400.527051ms for pod "kube-apiserver-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:27.367063  164458 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:27.767012  164458 pod_ready.go:92] pod "kube-controller-manager-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:27.767041  164458 pod_ready.go:81] duration metric: took 399.967961ms for pod "kube-controller-manager-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:27.767057  164458 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g67j8" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:28.167121  164458 pod_ready.go:92] pod "kube-proxy-g67j8" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:28.167149  164458 pod_ready.go:81] duration metric: took 400.083968ms for pod "kube-proxy-g67j8" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:28.167166  164458 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:28.701849  164458 pod_ready.go:92] pod "kube-scheduler-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:28.701877  164458 pod_ready.go:81] duration metric: took 534.704054ms for pod "kube-scheduler-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:28.701884  164458 pod_ready.go:38] duration metric: took 2.000656227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:24:28.701899  164458 api_server.go:52] waiting for apiserver process to appear ...
	I0729 12:24:28.701948  164458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:24:28.721492  164458 api_server.go:72] duration metric: took 2.20679799s to wait for apiserver process to appear ...
	I0729 12:24:28.721528  164458 api_server.go:88] waiting for apiserver healthz status ...
	I0729 12:24:28.721572  164458 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0729 12:24:28.727273  164458 api_server.go:279] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0729 12:24:28.728284  164458 api_server.go:141] control plane version: v1.30.3
	I0729 12:24:28.728309  164458 api_server.go:131] duration metric: took 6.773245ms to wait for apiserver health ...
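
The health check above simply expects an HTTP 200 with body "ok" from the apiserver's /healthz endpoint. A minimal Go probe of the same endpoint; it skips TLS verification instead of loading the cluster CA and relies on /healthz being reachable anonymously under the default RBAC, both of which are simplifications for illustration:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// Skip certificate verification for this quick manual probe; a real
		// client would verify against the cluster CA instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.61:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect "200 ok" once healthy
	}
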
	I0729 12:24:28.728316  164458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 12:24:28.769930  164458 system_pods.go:59] 6 kube-system pods found
	I0729 12:24:28.769981  164458 system_pods.go:61] "coredns-7db6d8ff4d-dth8w" [9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3] Running
	I0729 12:24:28.769988  164458 system_pods.go:61] "etcd-pause-737279" [a3e7c2fb-1721-4e04-8b6a-5d56c739d7c1] Running
	I0729 12:24:28.769994  164458 system_pods.go:61] "kube-apiserver-pause-737279" [002c0476-a619-48b5-9ccc-418b59526917] Running
	I0729 12:24:28.769999  164458 system_pods.go:61] "kube-controller-manager-pause-737279" [93857787-6d50-490c-a76a-4362bd3e64a0] Running
	I0729 12:24:28.770003  164458 system_pods.go:61] "kube-proxy-g67j8" [3b82113b-7e33-4acd-80a9-21b0a7b91d13] Running
	I0729 12:24:28.770008  164458 system_pods.go:61] "kube-scheduler-pause-737279" [e133ec5f-b9ac-4223-be31-8723de7bb5b6] Running
	I0729 12:24:28.770018  164458 system_pods.go:74] duration metric: took 41.693183ms to wait for pod list to return data ...
	I0729 12:24:28.770027  164458 default_sa.go:34] waiting for default service account to be created ...
	I0729 12:24:28.967403  164458 default_sa.go:45] found service account: "default"
	I0729 12:24:28.967436  164458 default_sa.go:55] duration metric: took 197.401776ms for default service account to be created ...
	I0729 12:24:28.967450  164458 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 12:24:29.168903  164458 system_pods.go:86] 6 kube-system pods found
	I0729 12:24:29.168934  164458 system_pods.go:89] "coredns-7db6d8ff4d-dth8w" [9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3] Running
	I0729 12:24:29.168940  164458 system_pods.go:89] "etcd-pause-737279" [a3e7c2fb-1721-4e04-8b6a-5d56c739d7c1] Running
	I0729 12:24:29.168944  164458 system_pods.go:89] "kube-apiserver-pause-737279" [002c0476-a619-48b5-9ccc-418b59526917] Running
	I0729 12:24:29.168949  164458 system_pods.go:89] "kube-controller-manager-pause-737279" [93857787-6d50-490c-a76a-4362bd3e64a0] Running
	I0729 12:24:29.168953  164458 system_pods.go:89] "kube-proxy-g67j8" [3b82113b-7e33-4acd-80a9-21b0a7b91d13] Running
	I0729 12:24:29.168956  164458 system_pods.go:89] "kube-scheduler-pause-737279" [e133ec5f-b9ac-4223-be31-8723de7bb5b6] Running
	I0729 12:24:29.168987  164458 system_pods.go:126] duration metric: took 201.529279ms to wait for k8s-apps to be running ...
	I0729 12:24:29.168997  164458 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 12:24:29.169045  164458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:24:29.188861  164458 system_svc.go:56] duration metric: took 19.846205ms WaitForService to wait for kubelet
	I0729 12:24:29.188897  164458 kubeadm.go:582] duration metric: took 2.674210197s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:24:29.188925  164458 node_conditions.go:102] verifying NodePressure condition ...
	I0729 12:24:29.367267  164458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 12:24:29.367300  164458 node_conditions.go:123] node cpu capacity is 2
	I0729 12:24:29.367317  164458 node_conditions.go:105] duration metric: took 178.385301ms to run NodePressure ...
	I0729 12:24:29.367333  164458 start.go:241] waiting for startup goroutines ...
	I0729 12:24:29.367341  164458 start.go:246] waiting for cluster config update ...
	I0729 12:24:29.367348  164458 start.go:255] writing updated cluster config ...
	I0729 12:24:29.485025  164458 ssh_runner.go:195] Run: rm -f paused
	I0729 12:24:29.537003  164458 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 12:24:29.659545  164458 out.go:177] * Done! kubectl is now configured to use "pause-737279" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.404362258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255870404338688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41d7291f-41ce-4cab-b2af-5254e4776b6c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.404990534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84d96410-c2d1-4bc7-8fc3-510ac0013d60 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.405053571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84d96410-c2d1-4bc7-8fc3-510ac0013d60 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.405363502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3e67f08a32fafc17f995bfb93ecb98ff9d78eab781d6850dfa89258f3706f8,PodSandboxId:4232586f82068f60f80e325678f9f0b117462b90cd75a680de98cc688e31fef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255852353122413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 192613f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655e3e84b38256666c14c450905c69a8ec9544c3316cfa727ce63abc5e377af8,PodSandboxId:397a3ff91ccbcea51fd43ded7fe2064f726d23282a6625d26bc8d4d3fa91e316,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722255852332363919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68aeb8d057e916a2b036f29edbeb0d6d79f5b06cc5fc4748f673150ca42a98fd,PodSandboxId:e28884b7839695abb5ebf05cbc23a7e7ced4d00c3c35dd1936ba9b59b3dd0110,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722255848529975191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2c
f3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bd3468de7b46b47012e67d8bea267f6ece234196038152eb0a59357c2b4c14,PodSandboxId:2a848534be433380de3dc31f9224327caf0bcd3e17e42bdb30f0a113a8828838,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722255848559296261,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451868e5e03df8e67d27e1b59d451a54af8899e8089704c8d8c0620a6d355f6,PodSandboxId:606796b77d121330b6e624b20ef4dc92f95373bce564774eb647a297cf02ef4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722255848504257220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9058e2cf19216cc93b8
bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f654097c9b250a2603be3f469cb8c13c7204eda6babbd1e026355b6afacf14a,PodSandboxId:134397e8bf97cb355838a9361bf5a8a177e9e08a7e72d6325af5f6d049f561b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722255848520801053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io
.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5,PodSandboxId:b0d3631d99534e1f7b6d6cc8809a0ea3cdf6c4548f30779de6e1fc98a2c51a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255843547171551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1926
13f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff,PodSandboxId:c151a83bebd5b199317d657f7e88b65dc0586a9f9a91671b9efb629971ea5fa5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722255842864477114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6,PodSandboxId:3498d24403dd019f4acbf27f697062648ce82bfdb8bc6841f0fa8ffe2f1ecca8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722255842695569039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011,PodSandboxId:cd2be879ffb11f44eed9e78a44b31098964bdb3862a35ff6384413e4125af617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255842652436420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-7372
79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2cf3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a,PodSandboxId:78582a6fe3ff09c325366150217ec902169bd821bb6e5c14d1fe9d130415e67a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255842813486433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7,PodSandboxId:9b98c403bbd519e766ac59262be7b47bbda7c81c4d55e2a1b0680c5ace39d8ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255842517169736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9058e2cf19216cc93b8bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84d96410-c2d1-4bc7-8fc3-510ac0013d60 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.448244269Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4c308f3-0a6b-4d5a-a271-d84fb048713c name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.448317679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4c308f3-0a6b-4d5a-a271-d84fb048713c name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.449409897Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80426e33-f4cf-47b8-81f9-84ffe89f4f1a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.450089587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255870450058785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80426e33-f4cf-47b8-81f9-84ffe89f4f1a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.450709948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f735278-8187-4c68-9a0c-d5090206a9f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.450781741Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f735278-8187-4c68-9a0c-d5090206a9f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.451053868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3e67f08a32fafc17f995bfb93ecb98ff9d78eab781d6850dfa89258f3706f8,PodSandboxId:4232586f82068f60f80e325678f9f0b117462b90cd75a680de98cc688e31fef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255852353122413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 192613f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655e3e84b38256666c14c450905c69a8ec9544c3316cfa727ce63abc5e377af8,PodSandboxId:397a3ff91ccbcea51fd43ded7fe2064f726d23282a6625d26bc8d4d3fa91e316,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722255852332363919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68aeb8d057e916a2b036f29edbeb0d6d79f5b06cc5fc4748f673150ca42a98fd,PodSandboxId:e28884b7839695abb5ebf05cbc23a7e7ced4d00c3c35dd1936ba9b59b3dd0110,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722255848529975191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2c
f3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bd3468de7b46b47012e67d8bea267f6ece234196038152eb0a59357c2b4c14,PodSandboxId:2a848534be433380de3dc31f9224327caf0bcd3e17e42bdb30f0a113a8828838,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722255848559296261,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451868e5e03df8e67d27e1b59d451a54af8899e8089704c8d8c0620a6d355f6,PodSandboxId:606796b77d121330b6e624b20ef4dc92f95373bce564774eb647a297cf02ef4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722255848504257220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9058e2cf19216cc93b8
bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f654097c9b250a2603be3f469cb8c13c7204eda6babbd1e026355b6afacf14a,PodSandboxId:134397e8bf97cb355838a9361bf5a8a177e9e08a7e72d6325af5f6d049f561b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722255848520801053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io
.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5,PodSandboxId:b0d3631d99534e1f7b6d6cc8809a0ea3cdf6c4548f30779de6e1fc98a2c51a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255843547171551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1926
13f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff,PodSandboxId:c151a83bebd5b199317d657f7e88b65dc0586a9f9a91671b9efb629971ea5fa5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722255842864477114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6,PodSandboxId:3498d24403dd019f4acbf27f697062648ce82bfdb8bc6841f0fa8ffe2f1ecca8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722255842695569039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011,PodSandboxId:cd2be879ffb11f44eed9e78a44b31098964bdb3862a35ff6384413e4125af617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255842652436420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-7372
79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2cf3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a,PodSandboxId:78582a6fe3ff09c325366150217ec902169bd821bb6e5c14d1fe9d130415e67a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255842813486433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7,PodSandboxId:9b98c403bbd519e766ac59262be7b47bbda7c81c4d55e2a1b0680c5ace39d8ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255842517169736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9058e2cf19216cc93b8bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f735278-8187-4c68-9a0c-d5090206a9f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.493914244Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e150c29-b9e9-4a7b-a7c4-b762da63f5ee name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.494013362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e150c29-b9e9-4a7b-a7c4-b762da63f5ee name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.495041960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45cae412-79f3-42de-9a8a-e0c291d360dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.495430677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255870495406909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45cae412-79f3-42de-9a8a-e0c291d360dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.496229410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d20d58d9-7c23-4c45-b807-18fdf417b8a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.496287302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d20d58d9-7c23-4c45-b807-18fdf417b8a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.496547060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3e67f08a32fafc17f995bfb93ecb98ff9d78eab781d6850dfa89258f3706f8,PodSandboxId:4232586f82068f60f80e325678f9f0b117462b90cd75a680de98cc688e31fef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255852353122413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 192613f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655e3e84b38256666c14c450905c69a8ec9544c3316cfa727ce63abc5e377af8,PodSandboxId:397a3ff91ccbcea51fd43ded7fe2064f726d23282a6625d26bc8d4d3fa91e316,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722255852332363919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68aeb8d057e916a2b036f29edbeb0d6d79f5b06cc5fc4748f673150ca42a98fd,PodSandboxId:e28884b7839695abb5ebf05cbc23a7e7ced4d00c3c35dd1936ba9b59b3dd0110,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722255848529975191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2c
f3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bd3468de7b46b47012e67d8bea267f6ece234196038152eb0a59357c2b4c14,PodSandboxId:2a848534be433380de3dc31f9224327caf0bcd3e17e42bdb30f0a113a8828838,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722255848559296261,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451868e5e03df8e67d27e1b59d451a54af8899e8089704c8d8c0620a6d355f6,PodSandboxId:606796b77d121330b6e624b20ef4dc92f95373bce564774eb647a297cf02ef4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722255848504257220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9058e2cf19216cc93b8
bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f654097c9b250a2603be3f469cb8c13c7204eda6babbd1e026355b6afacf14a,PodSandboxId:134397e8bf97cb355838a9361bf5a8a177e9e08a7e72d6325af5f6d049f561b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722255848520801053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io
.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5,PodSandboxId:b0d3631d99534e1f7b6d6cc8809a0ea3cdf6c4548f30779de6e1fc98a2c51a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255843547171551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1926
13f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff,PodSandboxId:c151a83bebd5b199317d657f7e88b65dc0586a9f9a91671b9efb629971ea5fa5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722255842864477114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6,PodSandboxId:3498d24403dd019f4acbf27f697062648ce82bfdb8bc6841f0fa8ffe2f1ecca8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722255842695569039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011,PodSandboxId:cd2be879ffb11f44eed9e78a44b31098964bdb3862a35ff6384413e4125af617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255842652436420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-7372
79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2cf3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a,PodSandboxId:78582a6fe3ff09c325366150217ec902169bd821bb6e5c14d1fe9d130415e67a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255842813486433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7,PodSandboxId:9b98c403bbd519e766ac59262be7b47bbda7c81c4d55e2a1b0680c5ace39d8ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255842517169736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9058e2cf19216cc93b8bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d20d58d9-7c23-4c45-b807-18fdf417b8a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.546107166Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1bf00c2c-6270-4338-8d78-d5c9a88164cb name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.546252422Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1bf00c2c-6270-4338-8d78-d5c9a88164cb name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.548296698Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78772c59-e844-4445-9587-db44fe9c49b1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.548949377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255870548908273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78772c59-e844-4445-9587-db44fe9c49b1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.549583593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0eb6ca2e-afb8-4993-a50f-acf9c89c186b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.549735226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0eb6ca2e-afb8-4993-a50f-acf9c89c186b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:30 pause-737279 crio[3001]: time="2024-07-29 12:24:30.550104056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3e67f08a32fafc17f995bfb93ecb98ff9d78eab781d6850dfa89258f3706f8,PodSandboxId:4232586f82068f60f80e325678f9f0b117462b90cd75a680de98cc688e31fef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255852353122413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 192613f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655e3e84b38256666c14c450905c69a8ec9544c3316cfa727ce63abc5e377af8,PodSandboxId:397a3ff91ccbcea51fd43ded7fe2064f726d23282a6625d26bc8d4d3fa91e316,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722255852332363919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68aeb8d057e916a2b036f29edbeb0d6d79f5b06cc5fc4748f673150ca42a98fd,PodSandboxId:e28884b7839695abb5ebf05cbc23a7e7ced4d00c3c35dd1936ba9b59b3dd0110,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722255848529975191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2c
f3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bd3468de7b46b47012e67d8bea267f6ece234196038152eb0a59357c2b4c14,PodSandboxId:2a848534be433380de3dc31f9224327caf0bcd3e17e42bdb30f0a113a8828838,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722255848559296261,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451868e5e03df8e67d27e1b59d451a54af8899e8089704c8d8c0620a6d355f6,PodSandboxId:606796b77d121330b6e624b20ef4dc92f95373bce564774eb647a297cf02ef4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722255848504257220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9058e2cf19216cc93b8
bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f654097c9b250a2603be3f469cb8c13c7204eda6babbd1e026355b6afacf14a,PodSandboxId:134397e8bf97cb355838a9361bf5a8a177e9e08a7e72d6325af5f6d049f561b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722255848520801053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io
.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5,PodSandboxId:b0d3631d99534e1f7b6d6cc8809a0ea3cdf6c4548f30779de6e1fc98a2c51a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255843547171551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1926
13f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff,PodSandboxId:c151a83bebd5b199317d657f7e88b65dc0586a9f9a91671b9efb629971ea5fa5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722255842864477114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6,PodSandboxId:3498d24403dd019f4acbf27f697062648ce82bfdb8bc6841f0fa8ffe2f1ecca8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722255842695569039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011,PodSandboxId:cd2be879ffb11f44eed9e78a44b31098964bdb3862a35ff6384413e4125af617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255842652436420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-7372
79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2cf3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a,PodSandboxId:78582a6fe3ff09c325366150217ec902169bd821bb6e5c14d1fe9d130415e67a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255842813486433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7,PodSandboxId:9b98c403bbd519e766ac59262be7b47bbda7c81c4d55e2a1b0680c5ace39d8ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255842517169736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9058e2cf19216cc93b8bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0eb6ca2e-afb8-4993-a50f-acf9c89c186b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4c3e67f08a32f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago      Running             coredns                   2                   4232586f82068       coredns-7db6d8ff4d-dth8w
	655e3e84b3825       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   18 seconds ago      Running             kube-proxy                2                   397a3ff91ccbc       kube-proxy-g67j8
	f9bd3468de7b4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   22 seconds ago      Running             kube-controller-manager   2                   2a848534be433       kube-controller-manager-pause-737279
	68aeb8d057e91       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   22 seconds ago      Running             kube-scheduler            2                   e28884b783969       kube-scheduler-pause-737279
	4f654097c9b25       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   22 seconds ago      Running             etcd                      2                   134397e8bf97c       etcd-pause-737279
	0451868e5e03d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   22 seconds ago      Running             kube-apiserver            2                   606796b77d121       kube-apiserver-pause-737279
	1678bac4a7262       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   27 seconds ago      Exited              coredns                   1                   b0d3631d99534       coredns-7db6d8ff4d-dth8w
	67f5680233239       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   27 seconds ago      Exited              etcd                      1                   c151a83bebd5b       etcd-pause-737279
	aafd1b2987099       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   27 seconds ago      Exited              kube-proxy                1                   78582a6fe3ff0       kube-proxy-g67j8
	4cc84da55d395       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   27 seconds ago      Exited              kube-controller-manager   1                   3498d24403dd0       kube-controller-manager-pause-737279
	b15bd5a323ff1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   27 seconds ago      Exited              kube-scheduler            1                   cd2be879ffb11       kube-scheduler-pause-737279
	0f72f27c65ddb       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   28 seconds ago      Exited              kube-apiserver            1                   9b98c403bbd51       kube-apiserver-pause-737279
	
	
	==> coredns [1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5] <==
	
	
	==> coredns [4c3e67f08a32fafc17f995bfb93ecb98ff9d78eab781d6850dfa89258f3706f8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35482 - 22485 "HINFO IN 6284925747029914226.1876960486442796283. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019325941s
	
	
	==> describe nodes <==
	Name:               pause-737279
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-737279
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=pause-737279
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_23_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:23:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-737279
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:24:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:24:11 +0000   Mon, 29 Jul 2024 12:23:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:24:11 +0000   Mon, 29 Jul 2024 12:23:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:24:11 +0000   Mon, 29 Jul 2024 12:23:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:24:11 +0000   Mon, 29 Jul 2024 12:23:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    pause-737279
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d93ed0c3c4842eab9e127a82053f32c
	  System UUID:                9d93ed0c-3c48-42ea-b9e1-27a82053f32c
	  Boot ID:                    18bba4ef-6047-4833-91c3-04c72a396939
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-dth8w                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     65s
	  kube-system                 etcd-pause-737279                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         80s
	  kube-system                 kube-apiserver-pause-737279             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-737279    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-g67j8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-pause-737279             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 63s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     79s                kubelet          Node pause-737279 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  79s                kubelet          Node pause-737279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s                kubelet          Node pause-737279 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeReady                78s                kubelet          Node pause-737279 status is now: NodeReady
	  Normal  RegisteredNode           66s                node-controller  Node pause-737279 event: Registered Node pause-737279 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-737279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-737279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-737279 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-737279 event: Registered Node pause-737279 in Controller
	
	
	==> dmesg <==
	[  +0.058371] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060797] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.164057] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.135371] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.262481] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.526431] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.059427] kauditd_printk_skb: 130 callbacks suppressed
	[Jul29 12:23] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.064478] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.310953] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.078637] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.881201] systemd-fstab-generator[1507]: Ignoring "noauto" option for root device
	[  +0.179402] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.035259] kauditd_printk_skb: 84 callbacks suppressed
	[Jul29 12:24] systemd-fstab-generator[2354]: Ignoring "noauto" option for root device
	[  +0.096196] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.091603] systemd-fstab-generator[2366]: Ignoring "noauto" option for root device
	[  +0.463260] systemd-fstab-generator[2531]: Ignoring "noauto" option for root device
	[  +0.299157] systemd-fstab-generator[2647]: Ignoring "noauto" option for root device
	[  +0.614018] systemd-fstab-generator[2857]: Ignoring "noauto" option for root device
	[  +1.650287] systemd-fstab-generator[3472]: Ignoring "noauto" option for root device
	[  +2.697962] systemd-fstab-generator[3595]: Ignoring "noauto" option for root device
	[  +0.085833] kauditd_printk_skb: 244 callbacks suppressed
	[ +15.999198] kauditd_printk_skb: 50 callbacks suppressed
	[  +2.681981] systemd-fstab-generator[4039]: Ignoring "noauto" option for root device
	
	
	==> etcd [4f654097c9b250a2603be3f469cb8c13c7204eda6babbd1e026355b6afacf14a] <==
	{"level":"info","ts":"2024-07-29T12:24:08.900993Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:24:08.901043Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:24:08.921373Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T12:24:08.92392Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"be6e2cf5fb13c","initial-advertise-peer-urls":["https://192.168.39.61:2380"],"listen-peer-urls":["https://192.168.39.61:2380"],"advertise-client-urls":["https://192.168.39.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T12:24:08.926739Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T12:24:08.921856Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2024-07-29T12:24:08.92744Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2024-07-29T12:24:10.060534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T12:24:10.060773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T12:24:10.06085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgPreVoteResp from be6e2cf5fb13c at term 2"}
	{"level":"info","ts":"2024-07-29T12:24:10.060929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T12:24:10.060968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgVoteResp from be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2024-07-29T12:24:10.061012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became leader at term 3"}
	{"level":"info","ts":"2024-07-29T12:24:10.061046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be6e2cf5fb13c elected leader be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2024-07-29T12:24:10.06631Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"be6e2cf5fb13c","local-member-attributes":"{Name:pause-737279 ClientURLs:[https://192.168.39.61:2379]}","request-path":"/0/members/be6e2cf5fb13c/attributes","cluster-id":"855213fb0218a9ad","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T12:24:10.066433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:24:10.066924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T12:24:10.066962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T12:24:10.066551Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:24:10.069332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.61:2379"}
	{"level":"info","ts":"2024-07-29T12:24:10.070806Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-29T12:24:28.688546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.843351ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12771242065836481876 > lease_revoke:<id:313c90fe72b59c1d>","response":"size:27"}
	{"level":"warn","ts":"2024-07-29T12:24:28.689038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.583373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-737279\" ","response":"range_response_count:1 size:5426"}
	{"level":"info","ts":"2024-07-29T12:24:28.688814Z","caller":"traceutil/trace.go:171","msg":"trace[1073697496] linearizableReadLoop","detail":"{readStateIndex:509; appliedIndex:508; }","duration":"134.369636ms","start":"2024-07-29T12:24:28.554428Z","end":"2024-07-29T12:24:28.688798Z","steps":["trace[1073697496] 'read index received'  (duration: 25.231µs)","trace[1073697496] 'applied index is now lower than readState.Index'  (duration: 134.342828ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:24:28.689112Z","caller":"traceutil/trace.go:171","msg":"trace[1031003297] range","detail":"{range_begin:/registry/minions/pause-737279; range_end:; response_count:1; response_revision:469; }","duration":"134.6959ms","start":"2024-07-29T12:24:28.554401Z","end":"2024-07-29T12:24:28.689097Z","steps":["trace[1031003297] 'agreement among raft nodes before linearized reading'  (duration: 134.549339ms)"],"step_count":1}
	
	
	==> etcd [67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff] <==
	{"level":"warn","ts":"2024-07-29T12:24:03.402636Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-29T12:24:03.402737Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.61:2380"]}
	{"level":"info","ts":"2024-07-29T12:24:03.402861Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T12:24:03.404968Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.61:2379"]}
	{"level":"info","ts":"2024-07-29T12:24:03.405553Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-737279","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.61:2380"],"listen-peer-urls":["https://192.168.39.61:2380"],"advertise-client-urls":["https://192.168.39.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluste
r-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-07-29T12:24:03.444477Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"34.052861ms"}
	{"level":"info","ts":"2024-07-29T12:24:03.481335Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T12:24:03.490123Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","commit-index":413}
	{"level":"info","ts":"2024-07-29T12:24:03.492249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T12:24:03.492365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became follower at term 2"}
	{"level":"info","ts":"2024-07-29T12:24:03.492396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft be6e2cf5fb13c [peers: [], term: 2, commit: 413, applied: 0, lastindex: 413, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T12:24:03.497428Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T12:24:03.515638Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":394}
	{"level":"info","ts":"2024-07-29T12:24:03.539514Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T12:24:03.553934Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"be6e2cf5fb13c","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T12:24:03.572403Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"be6e2cf5fb13c"}
	{"level":"info","ts":"2024-07-29T12:24:03.572525Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"be6e2cf5fb13c","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T12:24:03.573067Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T12:24:03.573219Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T12:24:03.573256Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T12:24:03.573263Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T12:24:03.573478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c switched to configuration voters=(3350086559969596)"}
	{"level":"info","ts":"2024-07-29T12:24:03.573528Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","added-peer-id":"be6e2cf5fb13c","added-peer-peer-urls":["https://192.168.39.61:2380"]}
	{"level":"info","ts":"2024-07-29T12:24:03.573642Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:24:03.573665Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> kernel <==
	 12:24:31 up 1 min,  0 users,  load average: 1.05, 0.37, 0.13
	Linux pause-737279 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0451868e5e03df8e67d27e1b59d451a54af8899e8089704c8d8c0620a6d355f6] <==
	I0729 12:24:11.540814       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 12:24:11.542818       1 aggregator.go:165] initial CRD sync complete...
	I0729 12:24:11.542920       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 12:24:11.542963       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 12:24:11.569914       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:24:11.570320       1 policy_source.go:224] refreshing policies
	I0729 12:24:11.591084       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 12:24:11.591783       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 12:24:11.596817       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 12:24:11.596855       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 12:24:11.599572       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 12:24:11.599645       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 12:24:11.607969       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0729 12:24:11.616375       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 12:24:11.643975       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 12:24:11.648307       1 cache.go:39] Caches are synced for autoregister controller
	I0729 12:24:11.667450       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 12:24:12.501177       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 12:24:13.287551       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 12:24:13.309894       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 12:24:13.367271       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 12:24:13.402991       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 12:24:13.410480       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 12:24:23.855212       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 12:24:23.956232       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7] <==
	I0729 12:24:03.052109       1 options.go:221] external host was not specified, using 192.168.39.61
	I0729 12:24:03.058408       1 server.go:148] Version: v1.30.3
	I0729 12:24:03.058480       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6] <==
	
	
	==> kube-controller-manager [f9bd3468de7b46b47012e67d8bea267f6ece234196038152eb0a59357c2b4c14] <==
	I0729 12:24:23.882407       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 12:24:23.886829       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 12:24:23.892571       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 12:24:23.896451       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 12:24:23.905324       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 12:24:23.905627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.512µs"
	I0729 12:24:23.908797       1 shared_informer.go:320] Caches are synced for service account
	I0729 12:24:23.908818       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 12:24:23.908925       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0729 12:24:23.909008       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 12:24:23.911459       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 12:24:23.912722       1 shared_informer.go:320] Caches are synced for taint
	I0729 12:24:23.912851       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 12:24:23.912958       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-737279"
	I0729 12:24:23.913022       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 12:24:23.916004       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 12:24:23.930435       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 12:24:23.937555       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 12:24:23.954883       1 shared_informer.go:320] Caches are synced for deployment
	I0729 12:24:23.992936       1 shared_informer.go:320] Caches are synced for disruption
	I0729 12:24:24.096222       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:24:24.105861       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:24:24.533489       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:24:24.536884       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:24:24.536935       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [655e3e84b38256666c14c450905c69a8ec9544c3316cfa727ce63abc5e377af8] <==
	I0729 12:24:12.574324       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:24:12.599213       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.61"]
	I0729 12:24:12.649937       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:24:12.650073       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:24:12.650150       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:24:12.654337       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:24:12.654827       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:24:12.655206       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:24:12.656973       1 config.go:192] "Starting service config controller"
	I0729 12:24:12.657486       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:24:12.657571       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:24:12.657602       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:24:12.658143       1 config.go:319] "Starting node config controller"
	I0729 12:24:12.658180       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:24:12.758801       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:24:12.758866       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:24:12.758891       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a] <==
	
	
	==> kube-scheduler [68aeb8d057e916a2b036f29edbeb0d6d79f5b06cc5fc4748f673150ca42a98fd] <==
	I0729 12:24:09.456822       1 serving.go:380] Generated self-signed cert in-memory
	W0729 12:24:11.550102       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 12:24:11.550206       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:24:11.550221       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 12:24:11.550229       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 12:24:11.579270       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 12:24:11.579332       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:24:11.589635       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 12:24:11.602858       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 12:24:11.602969       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 12:24:11.603029       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 12:24:11.703596       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011] <==
	
	
	==> kubelet <==
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.222900    3602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9058e2cf19216cc93b8bfafdb7797839-usr-share-ca-certificates\") pod \"kube-apiserver-pause-737279\" (UID: \"9058e2cf19216cc93b8bfafdb7797839\") " pod="kube-system/kube-apiserver-pause-737279"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: E0729 12:24:08.230189    3602 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-737279?timeout=10s\": dial tcp 192.168.39.61:8443: connect: connection refused" interval="400ms"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.323273    3602 kubelet_node_status.go:73] "Attempting to register node" node="pause-737279"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: E0729 12:24:08.325448    3602 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.61:8443: connect: connection refused" node="pause-737279"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.487861    3602 scope.go:117] "RemoveContainer" containerID="67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.490053    3602 scope.go:117] "RemoveContainer" containerID="0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.491644    3602 scope.go:117] "RemoveContainer" containerID="4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.493476    3602 scope.go:117] "RemoveContainer" containerID="b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: E0729 12:24:08.631259    3602 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-737279?timeout=10s\": dial tcp 192.168.39.61:8443: connect: connection refused" interval="800ms"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.728479    3602 kubelet_node_status.go:73] "Attempting to register node" node="pause-737279"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: E0729 12:24:08.729366    3602 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.61:8443: connect: connection refused" node="pause-737279"
	Jul 29 12:24:09 pause-737279 kubelet[3602]: I0729 12:24:09.531419    3602 kubelet_node_status.go:73] "Attempting to register node" node="pause-737279"
	Jul 29 12:24:11 pause-737279 kubelet[3602]: I0729 12:24:11.649613    3602 kubelet_node_status.go:112] "Node was previously registered" node="pause-737279"
	Jul 29 12:24:11 pause-737279 kubelet[3602]: I0729 12:24:11.649744    3602 kubelet_node_status.go:76] "Successfully registered node" node="pause-737279"
	Jul 29 12:24:11 pause-737279 kubelet[3602]: I0729 12:24:11.651321    3602 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 12:24:11 pause-737279 kubelet[3602]: I0729 12:24:11.652618    3602 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.001746    3602 apiserver.go:52] "Watching apiserver"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.006304    3602 topology_manager.go:215] "Topology Admit Handler" podUID="3b82113b-7e33-4acd-80a9-21b0a7b91d13" podNamespace="kube-system" podName="kube-proxy-g67j8"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.008414    3602 topology_manager.go:215] "Topology Admit Handler" podUID="9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dth8w"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.017070    3602 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.060145    3602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b82113b-7e33-4acd-80a9-21b0a7b91d13-lib-modules\") pod \"kube-proxy-g67j8\" (UID: \"3b82113b-7e33-4acd-80a9-21b0a7b91d13\") " pod="kube-system/kube-proxy-g67j8"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.060332    3602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b82113b-7e33-4acd-80a9-21b0a7b91d13-xtables-lock\") pod \"kube-proxy-g67j8\" (UID: \"3b82113b-7e33-4acd-80a9-21b0a7b91d13\") " pod="kube-system/kube-proxy-g67j8"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.311008    3602 scope.go:117] "RemoveContainer" containerID="aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.316979    3602 scope.go:117] "RemoveContainer" containerID="1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5"
	Jul 29 12:24:14 pause-737279 kubelet[3602]: I0729 12:24:14.665938    3602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-737279 -n pause-737279
helpers_test.go:261: (dbg) Run:  kubectl --context pause-737279 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-737279 -n pause-737279
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-737279 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-737279 logs -n 25: (1.611310061s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-185676           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 12:19 UTC | 29 Jul 24 12:21 UTC |
	|         | --memory=2200 --vm-driver=kvm2      |                           |         |         |                     |                     |
	|         |  --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:20 UTC | 29 Jul 24 12:21 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p offline-crio-390530              | offline-crio-390530       | jenkins | v1.33.1 | 29 Jul 24 12:20 UTC | 29 Jul 24 12:20 UTC |
	| start   | -p running-upgrade-661564           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 12:20 UTC | 29 Jul 24 12:21 UTC |
	|         | --memory=2200 --vm-driver=kvm2      |                           |         |         |                     |                     |
	|         |  --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:21 UTC |
	| start   | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:21 UTC |
	|         | --no-kubernetes --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-185676 stop         | minikube                  | jenkins | v1.26.0 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:21 UTC |
	| start   | -p stopped-upgrade-185676           | stopped-upgrade-185676    | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:22 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p running-upgrade-661564           | running-upgrade-661564    | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:23 UTC |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-390849 sudo         | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC |                     |
	|         | systemctl is-active --quiet         |                           |         |         |                     |                     |
	|         | service kubelet                     |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:21 UTC |
	| start   | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:21 UTC | 29 Jul 24 12:22 UTC |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-390849 sudo         | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:22 UTC |                     |
	|         | systemctl is-active --quiet         |                           |         |         |                     |                     |
	|         | service kubelet                     |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-390849              | NoKubernetes-390849       | jenkins | v1.33.1 | 29 Jul 24 12:22 UTC | 29 Jul 24 12:22 UTC |
	| start   | -p pause-737279 --memory=2048       | pause-737279              | jenkins | v1.33.1 | 29 Jul 24 12:22 UTC | 29 Jul 24 12:23 UTC |
	|         | --install-addons=false              |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2            |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-185676           | stopped-upgrade-185676    | jenkins | v1.33.1 | 29 Jul 24 12:22 UTC | 29 Jul 24 12:22 UTC |
	| start   | -p cert-expiration-524248           | cert-expiration-524248    | jenkins | v1.33.1 | 29 Jul 24 12:22 UTC | 29 Jul 24 12:23 UTC |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-661564           | running-upgrade-661564    | jenkins | v1.33.1 | 29 Jul 24 12:23 UTC | 29 Jul 24 12:23 UTC |
	| start   | -p force-systemd-flag-327451        | force-systemd-flag-327451 | jenkins | v1.33.1 | 29 Jul 24 12:23 UTC | 29 Jul 24 12:24 UTC |
	|         | --memory=2048 --force-systemd       |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| start   | -p pause-737279                     | pause-737279              | jenkins | v1.33.1 | 29 Jul 24 12:23 UTC | 29 Jul 24 12:24 UTC |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-714444        | kubernetes-upgrade-714444 | jenkins | v1.33.1 | 29 Jul 24 12:23 UTC | 29 Jul 24 12:23 UTC |
	| start   | -p kubernetes-upgrade-714444        | kubernetes-upgrade-714444 | jenkins | v1.33.1 | 29 Jul 24 12:23 UTC |                     |
	|         | --memory=2200                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                           |         |         |                     |                     |
	|         | --alsologtostderr                   |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-327451 ssh cat   | force-systemd-flag-327451 | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC | 29 Jul 24 12:24 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf  |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-327451        | force-systemd-flag-327451 | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC | 29 Jul 24 12:24 UTC |
	| start   | -p cert-options-882510              | cert-options-882510       | jenkins | v1.33.1 | 29 Jul 24 12:24 UTC |                     |
	|         | --memory=2048                       |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1           |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15       |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost         |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com    |                           |         |         |                     |                     |
	|         | --apiserver-port=8555               |                           |         |         |                     |                     |
	|         | --driver=kvm2                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio            |                           |         |         |                     |                     |
	|---------|-------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 12:24:13
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 12:24:13.638835  165002 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:24:13.639093  165002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:24:13.639098  165002 out.go:304] Setting ErrFile to fd 2...
	I0729 12:24:13.639102  165002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:24:13.639319  165002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 12:24:13.639959  165002 out.go:298] Setting JSON to false
	I0729 12:24:13.641161  165002 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":7605,"bootTime":1722248249,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 12:24:13.641227  165002 start.go:139] virtualization: kvm guest
	I0729 12:24:13.643646  165002 out.go:177] * [cert-options-882510] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 12:24:13.645241  165002 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 12:24:13.645327  165002 notify.go:220] Checking for updates...
	I0729 12:24:13.648070  165002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:24:13.649757  165002 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:24:13.651233  165002 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:24:13.652633  165002 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 12:24:13.654122  165002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:24:13.656063  165002 config.go:182] Loaded profile config "cert-expiration-524248": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:24:13.656182  165002 config.go:182] Loaded profile config "kubernetes-upgrade-714444": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 12:24:13.656332  165002 config.go:182] Loaded profile config "pause-737279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:24:13.656417  165002 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:24:13.696716  165002 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 12:24:13.698144  165002 start.go:297] selected driver: kvm2
	I0729 12:24:13.698155  165002 start.go:901] validating driver "kvm2" against <nil>
	I0729 12:24:13.698167  165002 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:24:13.699104  165002 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:24:13.699208  165002 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 12:24:13.718245  165002 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 12:24:13.718311  165002 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:24:13.718517  165002 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 12:24:13.718556  165002 cni.go:84] Creating CNI manager for ""
	I0729 12:24:13.718562  165002 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:24:13.718566  165002 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 12:24:13.718616  165002 start.go:340] cluster config:
	{Name:cert-options-882510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-882510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:24:13.718721  165002 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:24:13.721581  165002 out.go:177] * Starting "cert-options-882510" primary control-plane node in "cert-options-882510" cluster
	I0729 12:24:13.102678  164458 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 12:24:13.112911  164458 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 12:24:13.135921  164458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 12:24:13.146147  164458 system_pods.go:59] 6 kube-system pods found
	I0729 12:24:13.146207  164458 system_pods.go:61] "coredns-7db6d8ff4d-dth8w" [9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 12:24:13.146219  164458 system_pods.go:61] "etcd-pause-737279" [a3e7c2fb-1721-4e04-8b6a-5d56c739d7c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 12:24:13.146228  164458 system_pods.go:61] "kube-apiserver-pause-737279" [002c0476-a619-48b5-9ccc-418b59526917] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 12:24:13.146239  164458 system_pods.go:61] "kube-controller-manager-pause-737279" [93857787-6d50-490c-a76a-4362bd3e64a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 12:24:13.146252  164458 system_pods.go:61] "kube-proxy-g67j8" [3b82113b-7e33-4acd-80a9-21b0a7b91d13] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 12:24:13.146264  164458 system_pods.go:61] "kube-scheduler-pause-737279" [e133ec5f-b9ac-4223-be31-8723de7bb5b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 12:24:13.146276  164458 system_pods.go:74] duration metric: took 10.328029ms to wait for pod list to return data ...
	I0729 12:24:13.146289  164458 node_conditions.go:102] verifying NodePressure condition ...
	I0729 12:24:13.150095  164458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 12:24:13.150126  164458 node_conditions.go:123] node cpu capacity is 2
	I0729 12:24:13.150141  164458 node_conditions.go:105] duration metric: took 3.845005ms to run NodePressure ...
	I0729 12:24:13.150161  164458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:24:13.438182  164458 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 12:24:13.443458  164458 kubeadm.go:739] kubelet initialised
	I0729 12:24:13.443490  164458 kubeadm.go:740] duration metric: took 5.279813ms waiting for restarted kubelet to initialise ...
	I0729 12:24:13.443501  164458 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:24:13.452492  164458 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:11.772760  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:11.773317  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:24:11.773346  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:24:11.773276  164770 retry.go:31] will retry after 1.851462555s: waiting for machine to come up
	I0729 12:24:13.626482  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:13.628010  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:24:13.628038  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:24:13.627918  164770 retry.go:31] will retry after 3.254945292s: waiting for machine to come up
	I0729 12:24:13.722936  165002 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 12:24:13.722978  165002 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 12:24:13.722992  165002 cache.go:56] Caching tarball of preloaded images
	I0729 12:24:13.723085  165002 preload.go:172] Found /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 12:24:13.723093  165002 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 12:24:13.723230  165002 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/cert-options-882510/config.json ...
	I0729 12:24:13.723250  165002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/cert-options-882510/config.json: {Name:mk12081fdc9868d6c05d921d5427c0fc9fb60530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:24:13.723424  165002 start.go:360] acquireMachinesLock for cert-options-882510: {Name:mk5e457ce1a160493440916033ac0fe418b002eb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 12:24:14.961735  164458 pod_ready.go:92] pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:14.961770  164458 pod_ready.go:81] duration metric: took 1.509237101s for pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:14.961784  164458 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:16.967888  164458 pod_ready.go:102] pod "etcd-pause-737279" in "kube-system" namespace has status "Ready":"False"
	I0729 12:24:16.884852  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:16.885323  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | unable to find current IP address of domain kubernetes-upgrade-714444 in network mk-kubernetes-upgrade-714444
	I0729 12:24:16.885351  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | I0729 12:24:16.885283  164770 retry.go:31] will retry after 3.290405502s: waiting for machine to come up
	I0729 12:24:20.179885  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.180378  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has current primary IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.180392  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Found IP for machine: 192.168.50.36
	I0729 12:24:20.180408  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Reserving static IP address...
	I0729 12:24:20.180875  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-714444", mac: "52:54:00:92:96:14", ip: "192.168.50.36"} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.180899  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Reserved static IP address: 192.168.50.36
	I0729 12:24:20.180917  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | skip adding static IP to network mk-kubernetes-upgrade-714444 - found existing host DHCP lease matching {name: "kubernetes-upgrade-714444", mac: "52:54:00:92:96:14", ip: "192.168.50.36"}
	I0729 12:24:20.180930  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Getting to WaitForSSH function...
	I0729 12:24:20.180945  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Waiting for SSH to be available...
	I0729 12:24:20.183232  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.183600  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.183631  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.183808  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Using SSH client type: external
	I0729 12:24:20.183829  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | Using SSH private key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa (-rw-------)
	I0729 12:24:20.183869  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 12:24:20.183886  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | About to run SSH command:
	I0729 12:24:20.183897  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | exit 0
	I0729 12:24:20.309238  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | SSH cmd err, output: <nil>: 
	I0729 12:24:20.309637  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetConfigRaw
	I0729 12:24:20.310354  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:24:20.313154  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.313540  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.313575  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.313766  164647 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/config.json ...
	I0729 12:24:20.313958  164647 machine.go:94] provisionDockerMachine start ...
	I0729 12:24:20.313979  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:20.314210  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.316835  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.317225  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.317269  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.317454  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:20.317640  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.317810  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.317973  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:20.318163  164647 main.go:141] libmachine: Using SSH client type: native
	I0729 12:24:20.318401  164647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:24:20.318415  164647 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 12:24:20.425370  164647 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 12:24:20.425404  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetMachineName
	I0729 12:24:20.425687  164647 buildroot.go:166] provisioning hostname "kubernetes-upgrade-714444"
	I0729 12:24:20.425723  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetMachineName
	I0729 12:24:20.425924  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.428847  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.429366  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.429407  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.429551  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:20.429761  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.430058  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.430233  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:20.430426  164647 main.go:141] libmachine: Using SSH client type: native
	I0729 12:24:20.430613  164647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:24:20.430625  164647 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-714444 && echo "kubernetes-upgrade-714444" | sudo tee /etc/hostname
	I0729 12:24:20.551539  164647 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-714444
	
	I0729 12:24:20.551571  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.554290  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.554600  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.554633  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.554810  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:20.555053  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.555253  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.555381  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:20.555558  164647 main.go:141] libmachine: Using SSH client type: native
	I0729 12:24:20.555782  164647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:24:20.555800  164647 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-714444' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-714444/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-714444' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 12:24:20.669863  164647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 12:24:20.669896  164647 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19336-113730/.minikube CaCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19336-113730/.minikube}
	I0729 12:24:20.669959  164647 buildroot.go:174] setting up certificates
	I0729 12:24:20.669974  164647 provision.go:84] configureAuth start
	I0729 12:24:20.669992  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetMachineName
	I0729 12:24:20.670306  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:24:20.672860  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.673316  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.673350  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.673491  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.675993  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.676317  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.676344  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.676524  164647 provision.go:143] copyHostCerts
	I0729 12:24:20.676592  164647 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem, removing ...
	I0729 12:24:20.676606  164647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem
	I0729 12:24:20.676674  164647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/ca.pem (1082 bytes)
	I0729 12:24:20.676816  164647 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem, removing ...
	I0729 12:24:20.676826  164647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem
	I0729 12:24:20.676872  164647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/cert.pem (1123 bytes)
	I0729 12:24:20.676986  164647 exec_runner.go:144] found /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem, removing ...
	I0729 12:24:20.676997  164647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem
	I0729 12:24:20.677028  164647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19336-113730/.minikube/key.pem (1675 bytes)
	I0729 12:24:20.677113  164647 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-714444 san=[127.0.0.1 192.168.50.36 kubernetes-upgrade-714444 localhost minikube]
	I0729 12:24:20.797468  164647 provision.go:177] copyRemoteCerts
	I0729 12:24:20.797554  164647 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 12:24:20.797595  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.800697  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.801052  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.801084  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.801275  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:20.801461  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.801632  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:20.801766  164647 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:24:20.888339  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 12:24:20.912284  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 12:24:20.937103  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 12:24:20.962569  164647 provision.go:87] duration metric: took 292.57341ms to configureAuth
	I0729 12:24:20.962604  164647 buildroot.go:189] setting minikube options for container-runtime
	I0729 12:24:20.962848  164647 config.go:182] Loaded profile config "kubernetes-upgrade-714444": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 12:24:20.962949  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:20.966023  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.966463  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:20.966499  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:20.966717  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:20.966958  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.967152  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:20.967318  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:20.967550  164647 main.go:141] libmachine: Using SSH client type: native
	I0729 12:24:20.967767  164647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:24:20.967791  164647 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 12:24:21.470172  165002 start.go:364] duration metric: took 7.746678226s to acquireMachinesLock for "cert-options-882510"
	I0729 12:24:21.470224  165002 start.go:93] Provisioning new machine with config: &{Name:cert-options-882510 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-882510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:24:21.470332  165002 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 12:24:21.231962  164647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 12:24:21.231993  164647 machine.go:97] duration metric: took 918.021669ms to provisionDockerMachine
	I0729 12:24:21.232006  164647 start.go:293] postStartSetup for "kubernetes-upgrade-714444" (driver="kvm2")
	I0729 12:24:21.232030  164647 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 12:24:21.232062  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:21.232365  164647 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 12:24:21.232392  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:21.235036  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.235363  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:21.235395  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.235563  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:21.235764  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:21.235964  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:21.236153  164647 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:24:21.320619  164647 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 12:24:21.324846  164647 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 12:24:21.324886  164647 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/addons for local assets ...
	I0729 12:24:21.325000  164647 filesync.go:126] Scanning /home/jenkins/minikube-integration/19336-113730/.minikube/files for local assets ...
	I0729 12:24:21.325088  164647 filesync.go:149] local asset: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem -> 1209632.pem in /etc/ssl/certs
	I0729 12:24:21.325217  164647 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 12:24:21.335282  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:24:21.360735  164647 start.go:296] duration metric: took 128.695915ms for postStartSetup
	I0729 12:24:21.360789  164647 fix.go:56] duration metric: took 19.86692126s for fixHost
	I0729 12:24:21.360819  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:21.363722  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.364165  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:21.364209  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.364319  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:21.364584  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:21.364776  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:21.364927  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:21.365239  164647 main.go:141] libmachine: Using SSH client type: native
	I0729 12:24:21.365420  164647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0729 12:24:21.365435  164647 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 12:24:21.469928  164647 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722255861.440086074
	
	I0729 12:24:21.469965  164647 fix.go:216] guest clock: 1722255861.440086074
	I0729 12:24:21.469976  164647 fix.go:229] Guest: 2024-07-29 12:24:21.440086074 +0000 UTC Remote: 2024-07-29 12:24:21.360794225 +0000 UTC m=+30.400862407 (delta=79.291849ms)
	I0729 12:24:21.470010  164647 fix.go:200] guest clock delta is within tolerance: 79.291849ms
	I0729 12:24:21.470033  164647 start.go:83] releasing machines lock for "kubernetes-upgrade-714444", held for 19.976194373s
	I0729 12:24:21.470085  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:21.470399  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:24:21.473356  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.473740  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:21.473773  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.473863  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:21.474478  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:21.474710  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .DriverName
	I0729 12:24:21.474802  164647 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 12:24:21.474843  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:21.474965  164647 ssh_runner.go:195] Run: cat /version.json
	I0729 12:24:21.474991  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHHostname
	I0729 12:24:21.477561  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.477843  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.477990  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:21.478018  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.478185  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:21.478316  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:21.478347  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:21.478394  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:21.478505  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHPort
	I0729 12:24:21.478595  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:21.478649  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHKeyPath
	I0729 12:24:21.478725  164647 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:24:21.478818  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetSSHUsername
	I0729 12:24:21.478952  164647 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/kubernetes-upgrade-714444/id_rsa Username:docker}
	I0729 12:24:21.587800  164647 ssh_runner.go:195] Run: systemctl --version
	I0729 12:24:21.594581  164647 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 12:24:21.750970  164647 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 12:24:21.757026  164647 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 12:24:21.757104  164647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 12:24:21.773167  164647 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 12:24:21.773200  164647 start.go:495] detecting cgroup driver to use...
	I0729 12:24:21.773277  164647 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 12:24:21.790322  164647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 12:24:21.804998  164647 docker.go:217] disabling cri-docker service (if available) ...
	I0729 12:24:21.805076  164647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 12:24:21.819370  164647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 12:24:21.833788  164647 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 12:24:21.959664  164647 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 12:24:22.112585  164647 docker.go:233] disabling docker service ...
	I0729 12:24:22.112686  164647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 12:24:22.127012  164647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 12:24:22.140642  164647 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 12:24:22.287378  164647 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 12:24:22.397752  164647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 12:24:22.414110  164647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 12:24:22.433400  164647 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 12:24:22.433465  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.444261  164647 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 12:24:22.444334  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.455265  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.467714  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.478609  164647 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 12:24:22.488890  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.499387  164647 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.517026  164647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 12:24:22.527767  164647 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 12:24:22.537732  164647 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 12:24:22.537796  164647 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 12:24:22.551819  164647 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 12:24:22.562002  164647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:24:22.687562  164647 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 12:24:22.841785  164647 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 12:24:22.841852  164647 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 12:24:22.847926  164647 start.go:563] Will wait 60s for crictl version
	I0729 12:24:22.848007  164647 ssh_runner.go:195] Run: which crictl
	I0729 12:24:22.852818  164647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 12:24:22.894513  164647 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 12:24:22.894626  164647 ssh_runner.go:195] Run: crio --version
	I0729 12:24:22.927644  164647 ssh_runner.go:195] Run: crio --version
	I0729 12:24:22.960302  164647 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 12:24:21.472280  165002 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 12:24:21.472494  165002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:24:21.472526  165002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:24:21.493620  165002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35009
	I0729 12:24:21.494070  165002 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:24:21.494693  165002 main.go:141] libmachine: Using API Version  1
	I0729 12:24:21.494710  165002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:24:21.495109  165002 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:24:21.495358  165002 main.go:141] libmachine: (cert-options-882510) Calling .GetMachineName
	I0729 12:24:21.495517  165002 main.go:141] libmachine: (cert-options-882510) Calling .DriverName
	I0729 12:24:21.495674  165002 start.go:159] libmachine.API.Create for "cert-options-882510" (driver="kvm2")
	I0729 12:24:21.495708  165002 client.go:168] LocalClient.Create starting
	I0729 12:24:21.495742  165002 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem
	I0729 12:24:21.495780  165002 main.go:141] libmachine: Decoding PEM data...
	I0729 12:24:21.495796  165002 main.go:141] libmachine: Parsing certificate...
	I0729 12:24:21.495864  165002 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem
	I0729 12:24:21.495882  165002 main.go:141] libmachine: Decoding PEM data...
	I0729 12:24:21.495892  165002 main.go:141] libmachine: Parsing certificate...
	I0729 12:24:21.495910  165002 main.go:141] libmachine: Running pre-create checks...
	I0729 12:24:21.495919  165002 main.go:141] libmachine: (cert-options-882510) Calling .PreCreateCheck
	I0729 12:24:21.496292  165002 main.go:141] libmachine: (cert-options-882510) Calling .GetConfigRaw
	I0729 12:24:21.496793  165002 main.go:141] libmachine: Creating machine...
	I0729 12:24:21.496804  165002 main.go:141] libmachine: (cert-options-882510) Calling .Create
	I0729 12:24:21.497044  165002 main.go:141] libmachine: (cert-options-882510) Creating KVM machine...
	I0729 12:24:21.498474  165002 main.go:141] libmachine: (cert-options-882510) DBG | found existing default KVM network
	I0729 12:24:21.500062  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.499887  165064 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e1:cd:0c} reservation:<nil>}
	I0729 12:24:21.501029  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.500897  165064 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6d:9b:ef} reservation:<nil>}
	I0729 12:24:21.502045  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.501957  165064 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:91:ef:de} reservation:<nil>}
	I0729 12:24:21.503189  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.503089  165064 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289d50}
	I0729 12:24:21.503224  165002 main.go:141] libmachine: (cert-options-882510) DBG | created network xml: 
	I0729 12:24:21.503234  165002 main.go:141] libmachine: (cert-options-882510) DBG | <network>
	I0729 12:24:21.503243  165002 main.go:141] libmachine: (cert-options-882510) DBG |   <name>mk-cert-options-882510</name>
	I0729 12:24:21.503250  165002 main.go:141] libmachine: (cert-options-882510) DBG |   <dns enable='no'/>
	I0729 12:24:21.503257  165002 main.go:141] libmachine: (cert-options-882510) DBG |   
	I0729 12:24:21.503265  165002 main.go:141] libmachine: (cert-options-882510) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0729 12:24:21.503295  165002 main.go:141] libmachine: (cert-options-882510) DBG |     <dhcp>
	I0729 12:24:21.503302  165002 main.go:141] libmachine: (cert-options-882510) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0729 12:24:21.503325  165002 main.go:141] libmachine: (cert-options-882510) DBG |     </dhcp>
	I0729 12:24:21.503335  165002 main.go:141] libmachine: (cert-options-882510) DBG |   </ip>
	I0729 12:24:21.503344  165002 main.go:141] libmachine: (cert-options-882510) DBG |   
	I0729 12:24:21.503350  165002 main.go:141] libmachine: (cert-options-882510) DBG | </network>
	I0729 12:24:21.503360  165002 main.go:141] libmachine: (cert-options-882510) DBG | 
	I0729 12:24:21.509371  165002 main.go:141] libmachine: (cert-options-882510) DBG | trying to create private KVM network mk-cert-options-882510 192.168.72.0/24...
	I0729 12:24:21.587889  165002 main.go:141] libmachine: (cert-options-882510) DBG | private KVM network mk-cert-options-882510 192.168.72.0/24 created
	I0729 12:24:21.587917  165002 main.go:141] libmachine: (cert-options-882510) Setting up store path in /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510 ...
	I0729 12:24:21.587938  165002 main.go:141] libmachine: (cert-options-882510) Building disk image from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 12:24:21.588023  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.587943  165064 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:24:21.588316  165002 main.go:141] libmachine: (cert-options-882510) Downloading /home/jenkins/minikube-integration/19336-113730/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 12:24:21.846386  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:21.846234  165064 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510/id_rsa...
	I0729 12:24:22.058220  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:22.058090  165064 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510/cert-options-882510.rawdisk...
	I0729 12:24:22.058234  165002 main.go:141] libmachine: (cert-options-882510) DBG | Writing magic tar header
	I0729 12:24:22.058259  165002 main.go:141] libmachine: (cert-options-882510) DBG | Writing SSH key tar header
	I0729 12:24:22.058271  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:22.058257  165064 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510 ...
	I0729 12:24:22.058371  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510
	I0729 12:24:22.058382  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube/machines
	I0729 12:24:22.058421  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 12:24:22.058447  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19336-113730
	I0729 12:24:22.058456  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510 (perms=drwx------)
	I0729 12:24:22.058468  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube/machines (perms=drwxr-xr-x)
	I0729 12:24:22.058477  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730/.minikube (perms=drwxr-xr-x)
	I0729 12:24:22.058484  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins/minikube-integration/19336-113730 (perms=drwxrwxr-x)
	I0729 12:24:22.058489  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 12:24:22.058495  165002 main.go:141] libmachine: (cert-options-882510) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 12:24:22.058499  165002 main.go:141] libmachine: (cert-options-882510) Creating domain...
	I0729 12:24:22.058559  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 12:24:22.058574  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home/jenkins
	I0729 12:24:22.058580  165002 main.go:141] libmachine: (cert-options-882510) DBG | Checking permissions on dir: /home
	I0729 12:24:22.058584  165002 main.go:141] libmachine: (cert-options-882510) DBG | Skipping /home - not owner
	I0729 12:24:22.059761  165002 main.go:141] libmachine: (cert-options-882510) define libvirt domain using xml: 
	I0729 12:24:22.059772  165002 main.go:141] libmachine: (cert-options-882510) <domain type='kvm'>
	I0729 12:24:22.059778  165002 main.go:141] libmachine: (cert-options-882510)   <name>cert-options-882510</name>
	I0729 12:24:22.059782  165002 main.go:141] libmachine: (cert-options-882510)   <memory unit='MiB'>2048</memory>
	I0729 12:24:22.059786  165002 main.go:141] libmachine: (cert-options-882510)   <vcpu>2</vcpu>
	I0729 12:24:22.059791  165002 main.go:141] libmachine: (cert-options-882510)   <features>
	I0729 12:24:22.059796  165002 main.go:141] libmachine: (cert-options-882510)     <acpi/>
	I0729 12:24:22.059799  165002 main.go:141] libmachine: (cert-options-882510)     <apic/>
	I0729 12:24:22.059803  165002 main.go:141] libmachine: (cert-options-882510)     <pae/>
	I0729 12:24:22.059808  165002 main.go:141] libmachine: (cert-options-882510)     
	I0729 12:24:22.059814  165002 main.go:141] libmachine: (cert-options-882510)   </features>
	I0729 12:24:22.059820  165002 main.go:141] libmachine: (cert-options-882510)   <cpu mode='host-passthrough'>
	I0729 12:24:22.059826  165002 main.go:141] libmachine: (cert-options-882510)   
	I0729 12:24:22.059831  165002 main.go:141] libmachine: (cert-options-882510)   </cpu>
	I0729 12:24:22.059837  165002 main.go:141] libmachine: (cert-options-882510)   <os>
	I0729 12:24:22.059843  165002 main.go:141] libmachine: (cert-options-882510)     <type>hvm</type>
	I0729 12:24:22.059880  165002 main.go:141] libmachine: (cert-options-882510)     <boot dev='cdrom'/>
	I0729 12:24:22.059891  165002 main.go:141] libmachine: (cert-options-882510)     <boot dev='hd'/>
	I0729 12:24:22.059898  165002 main.go:141] libmachine: (cert-options-882510)     <bootmenu enable='no'/>
	I0729 12:24:22.059902  165002 main.go:141] libmachine: (cert-options-882510)   </os>
	I0729 12:24:22.059906  165002 main.go:141] libmachine: (cert-options-882510)   <devices>
	I0729 12:24:22.059915  165002 main.go:141] libmachine: (cert-options-882510)     <disk type='file' device='cdrom'>
	I0729 12:24:22.059927  165002 main.go:141] libmachine: (cert-options-882510)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510/boot2docker.iso'/>
	I0729 12:24:22.059943  165002 main.go:141] libmachine: (cert-options-882510)       <target dev='hdc' bus='scsi'/>
	I0729 12:24:22.059950  165002 main.go:141] libmachine: (cert-options-882510)       <readonly/>
	I0729 12:24:22.059957  165002 main.go:141] libmachine: (cert-options-882510)     </disk>
	I0729 12:24:22.059965  165002 main.go:141] libmachine: (cert-options-882510)     <disk type='file' device='disk'>
	I0729 12:24:22.059974  165002 main.go:141] libmachine: (cert-options-882510)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 12:24:22.059990  165002 main.go:141] libmachine: (cert-options-882510)       <source file='/home/jenkins/minikube-integration/19336-113730/.minikube/machines/cert-options-882510/cert-options-882510.rawdisk'/>
	I0729 12:24:22.059995  165002 main.go:141] libmachine: (cert-options-882510)       <target dev='hda' bus='virtio'/>
	I0729 12:24:22.060002  165002 main.go:141] libmachine: (cert-options-882510)     </disk>
	I0729 12:24:22.060008  165002 main.go:141] libmachine: (cert-options-882510)     <interface type='network'>
	I0729 12:24:22.060016  165002 main.go:141] libmachine: (cert-options-882510)       <source network='mk-cert-options-882510'/>
	I0729 12:24:22.060022  165002 main.go:141] libmachine: (cert-options-882510)       <model type='virtio'/>
	I0729 12:24:22.060029  165002 main.go:141] libmachine: (cert-options-882510)     </interface>
	I0729 12:24:22.060034  165002 main.go:141] libmachine: (cert-options-882510)     <interface type='network'>
	I0729 12:24:22.060042  165002 main.go:141] libmachine: (cert-options-882510)       <source network='default'/>
	I0729 12:24:22.060048  165002 main.go:141] libmachine: (cert-options-882510)       <model type='virtio'/>
	I0729 12:24:22.060057  165002 main.go:141] libmachine: (cert-options-882510)     </interface>
	I0729 12:24:22.060063  165002 main.go:141] libmachine: (cert-options-882510)     <serial type='pty'>
	I0729 12:24:22.060070  165002 main.go:141] libmachine: (cert-options-882510)       <target port='0'/>
	I0729 12:24:22.060075  165002 main.go:141] libmachine: (cert-options-882510)     </serial>
	I0729 12:24:22.060083  165002 main.go:141] libmachine: (cert-options-882510)     <console type='pty'>
	I0729 12:24:22.060089  165002 main.go:141] libmachine: (cert-options-882510)       <target type='serial' port='0'/>
	I0729 12:24:22.060096  165002 main.go:141] libmachine: (cert-options-882510)     </console>
	I0729 12:24:22.060102  165002 main.go:141] libmachine: (cert-options-882510)     <rng model='virtio'>
	I0729 12:24:22.060111  165002 main.go:141] libmachine: (cert-options-882510)       <backend model='random'>/dev/random</backend>
	I0729 12:24:22.060122  165002 main.go:141] libmachine: (cert-options-882510)     </rng>
	I0729 12:24:22.060129  165002 main.go:141] libmachine: (cert-options-882510)     
	I0729 12:24:22.060134  165002 main.go:141] libmachine: (cert-options-882510)     
	I0729 12:24:22.060139  165002 main.go:141] libmachine: (cert-options-882510)   </devices>
	I0729 12:24:22.060144  165002 main.go:141] libmachine: (cert-options-882510) </domain>
	I0729 12:24:22.060154  165002 main.go:141] libmachine: (cert-options-882510) 
	I0729 12:24:22.064449  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:e9:9f:fe in network default
	I0729 12:24:22.065091  165002 main.go:141] libmachine: (cert-options-882510) Ensuring networks are active...
	I0729 12:24:22.065105  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:22.065747  165002 main.go:141] libmachine: (cert-options-882510) Ensuring network default is active
	I0729 12:24:22.066167  165002 main.go:141] libmachine: (cert-options-882510) Ensuring network mk-cert-options-882510 is active
	I0729 12:24:22.066708  165002 main.go:141] libmachine: (cert-options-882510) Getting domain xml...
	I0729 12:24:22.067414  165002 main.go:141] libmachine: (cert-options-882510) Creating domain...
	I0729 12:24:23.454966  165002 main.go:141] libmachine: (cert-options-882510) Waiting to get IP...
	I0729 12:24:23.455917  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:23.456622  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:23.456778  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:23.456682  165064 retry.go:31] will retry after 203.823915ms: waiting for machine to come up
	I0729 12:24:18.968829  164458 pod_ready.go:102] pod "etcd-pause-737279" in "kube-system" namespace has status "Ready":"False"
	I0729 12:24:20.970100  164458 pod_ready.go:102] pod "etcd-pause-737279" in "kube-system" namespace has status "Ready":"False"
	I0729 12:24:22.969320  164458 pod_ready.go:92] pod "etcd-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:22.969353  164458 pod_ready.go:81] duration metric: took 8.007561066s for pod "etcd-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:22.969368  164458 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:22.981401  164458 pod_ready.go:92] pod "kube-apiserver-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:22.981441  164458 pod_ready.go:81] duration metric: took 12.063901ms for pod "kube-apiserver-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:22.981459  164458 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:22.961650  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) Calling .GetIP
	I0729 12:24:22.965131  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:22.965706  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:96:14", ip: ""} in network mk-kubernetes-upgrade-714444: {Iface:virbr2 ExpiryTime:2024-07-29 13:24:13 +0000 UTC Type:0 Mac:52:54:00:92:96:14 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-714444 Clientid:01:52:54:00:92:96:14}
	I0729 12:24:22.965731  164647 main.go:141] libmachine: (kubernetes-upgrade-714444) DBG | domain kubernetes-upgrade-714444 has defined IP address 192.168.50.36 and MAC address 52:54:00:92:96:14 in network mk-kubernetes-upgrade-714444
	I0729 12:24:22.965985  164647 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 12:24:22.970785  164647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:24:22.985311  164647 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-714444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-714444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 12:24:22.985447  164647 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 12:24:22.985511  164647 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:24:23.035105  164647 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 12:24:23.035198  164647 ssh_runner.go:195] Run: which lz4
	I0729 12:24:23.039553  164647 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 12:24:23.043683  164647 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 12:24:23.043735  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0729 12:24:24.377689  164647 crio.go:462] duration metric: took 1.33817998s to copy over tarball
	I0729 12:24:24.377853  164647 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 12:24:24.989663  164458 pod_ready.go:102] pod "kube-controller-manager-pause-737279" in "kube-system" namespace has status "Ready":"False"
	I0729 12:24:26.488527  164458 pod_ready.go:92] pod "kube-controller-manager-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:26.488553  164458 pod_ready.go:81] duration metric: took 3.507084365s for pod "kube-controller-manager-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.488567  164458 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g67j8" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.494902  164458 pod_ready.go:92] pod "kube-proxy-g67j8" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:26.494929  164458 pod_ready.go:81] duration metric: took 6.353354ms for pod "kube-proxy-g67j8" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.494942  164458 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.500951  164458 pod_ready.go:92] pod "kube-scheduler-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:26.501003  164458 pod_ready.go:81] duration metric: took 6.051504ms for pod "kube-scheduler-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.501014  164458 pod_ready.go:38] duration metric: took 13.057500758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:24:26.501043  164458 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 12:24:26.512857  164458 ops.go:34] apiserver oom_adj: -16
	I0729 12:24:26.512885  164458 kubeadm.go:597] duration metric: took 20.606246834s to restartPrimaryControlPlane
	I0729 12:24:26.512897  164458 kubeadm.go:394] duration metric: took 20.72140841s to StartCluster
	I0729 12:24:26.512922  164458 settings.go:142] acquiring lock: {Name:mkb2a487c2f52476061a6d736b8e75563062eb9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:24:26.513060  164458 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:24:26.514350  164458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/kubeconfig: {Name:mkb219e196dca6dd8aa7af14918c6562be58786a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:24:26.514652  164458 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.61 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 12:24:26.514827  164458 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 12:24:26.515012  164458 config.go:182] Loaded profile config "pause-737279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:24:26.516901  164458 out.go:177] * Verifying Kubernetes components...
	I0729 12:24:26.516900  164458 out.go:177] * Enabled addons: 
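For context, the pod_ready waits above reduce to polling each pod's Ready condition until a per-pod deadline. A minimal Go sketch of such a loop, assuming kubectl is on PATH and the current kubeconfig points at this cluster (the helper names here are illustrative, not minikube's own):

    // readiness_poll.go: a minimal sketch of waiting for a pod's Ready condition,
    // mirroring the pod_ready.go lines above. Illustrative only.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(ns, name string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "pod", name, "-n", ns,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
        for time.Now().Before(deadline) {
            if ok, err := podReady("kube-system", "kube-proxy-g67j8"); err == nil && ok {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }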
	I0729 12:24:23.662262  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:23.662999  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:23.663024  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:23.662941  165064 retry.go:31] will retry after 286.478779ms: waiting for machine to come up
	I0729 12:24:23.951741  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:23.952358  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:23.952381  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:23.952331  165064 retry.go:31] will retry after 398.431963ms: waiting for machine to come up
	I0729 12:24:24.352062  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:24.352612  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:24.352624  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:24.352575  165064 retry.go:31] will retry after 405.37366ms: waiting for machine to come up
	I0729 12:24:24.759522  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:24.760205  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:24.760227  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:24.760155  165064 retry.go:31] will retry after 659.066485ms: waiting for machine to come up
	I0729 12:24:25.421249  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:25.421899  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:25.421920  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:25.421847  165064 retry.go:31] will retry after 910.229267ms: waiting for machine to come up
	I0729 12:24:26.334126  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:26.334757  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:26.334779  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:26.334701  165064 retry.go:31] will retry after 980.588569ms: waiting for machine to come up
	I0729 12:24:27.317198  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:27.317757  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:27.317776  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:27.317677  165064 retry.go:31] will retry after 1.267879012s: waiting for machine to come up
	I0729 12:24:28.587565  165002 main.go:141] libmachine: (cert-options-882510) DBG | domain cert-options-882510 has defined MAC address 52:54:00:f8:55:74 in network mk-cert-options-882510
	I0729 12:24:28.588148  165002 main.go:141] libmachine: (cert-options-882510) DBG | unable to find current IP address of domain cert-options-882510 in network mk-cert-options-882510
	I0729 12:24:28.588170  165002 main.go:141] libmachine: (cert-options-882510) DBG | I0729 12:24:28.588093  165064 retry.go:31] will retry after 1.797079781s: waiting for machine to come up
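The cert-options machine bring-up above is a plain retry loop with growing, jittered delays ("will retry after ...: waiting for machine to come up"). A minimal Go sketch of that pattern, where lookupIP is a hypothetical stand-in for the libvirt DHCP lease query the driver actually performs:

    // retry_backoff.go: a minimal sketch of a jittered, growing retry loop.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a placeholder for "find current IP address of domain ... in network ...".
    func lookupIP() (string, error) {
        return "", errors.New("no lease yet")
    }

    func main() {
        wait := 200 * time.Millisecond
        for attempt := 1; attempt <= 10; attempt++ {
            if ip, err := lookupIP(); err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            sleep := wait + time.Duration(rand.Int63n(int64(wait))) // grow with jitter
            fmt.Printf("retry %d: will retry after %v: waiting for machine to come up\n", attempt, sleep)
            time.Sleep(sleep)
            wait *= 2
        }
        fmt.Println("gave up waiting for machine to come up")
    }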
	I0729 12:24:26.518097  164458 addons.go:510] duration metric: took 3.278237ms for enable addons: enabled=[]
	I0729 12:24:26.518151  164458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:24:26.676055  164458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:24:26.697882  164458 node_ready.go:35] waiting up to 6m0s for node "pause-737279" to be "Ready" ...
	I0729 12:24:26.701180  164458 node_ready.go:49] node "pause-737279" has status "Ready":"True"
	I0729 12:24:26.701205  164458 node_ready.go:38] duration metric: took 3.290812ms for node "pause-737279" to be "Ready" ...
	I0729 12:24:26.701217  164458 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:24:26.706947  164458 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.713091  164458 pod_ready.go:92] pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:26.713116  164458 pod_ready.go:81] duration metric: took 6.137538ms for pod "coredns-7db6d8ff4d-dth8w" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.713126  164458 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.966464  164458 pod_ready.go:92] pod "etcd-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:26.966496  164458 pod_ready.go:81] duration metric: took 253.362507ms for pod "etcd-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:26.966512  164458 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:27.367022  164458 pod_ready.go:92] pod "kube-apiserver-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:27.367048  164458 pod_ready.go:81] duration metric: took 400.527051ms for pod "kube-apiserver-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:27.367063  164458 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:27.767012  164458 pod_ready.go:92] pod "kube-controller-manager-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:27.767041  164458 pod_ready.go:81] duration metric: took 399.967961ms for pod "kube-controller-manager-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:27.767057  164458 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g67j8" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:28.167121  164458 pod_ready.go:92] pod "kube-proxy-g67j8" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:28.167149  164458 pod_ready.go:81] duration metric: took 400.083968ms for pod "kube-proxy-g67j8" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:28.167166  164458 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:28.701849  164458 pod_ready.go:92] pod "kube-scheduler-pause-737279" in "kube-system" namespace has status "Ready":"True"
	I0729 12:24:28.701877  164458 pod_ready.go:81] duration metric: took 534.704054ms for pod "kube-scheduler-pause-737279" in "kube-system" namespace to be "Ready" ...
	I0729 12:24:28.701884  164458 pod_ready.go:38] duration metric: took 2.000656227s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 12:24:28.701899  164458 api_server.go:52] waiting for apiserver process to appear ...
	I0729 12:24:28.701948  164458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:24:28.721492  164458 api_server.go:72] duration metric: took 2.20679799s to wait for apiserver process to appear ...
	I0729 12:24:28.721528  164458 api_server.go:88] waiting for apiserver healthz status ...
	I0729 12:24:28.721572  164458 api_server.go:253] Checking apiserver healthz at https://192.168.39.61:8443/healthz ...
	I0729 12:24:28.727273  164458 api_server.go:279] https://192.168.39.61:8443/healthz returned 200:
	ok
	I0729 12:24:28.728284  164458 api_server.go:141] control plane version: v1.30.3
	I0729 12:24:28.728309  164458 api_server.go:131] duration metric: took 6.773245ms to wait for apiserver health ...
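The healthz probe logged above is an HTTPS GET against the apiserver endpoint. A minimal Go sketch using the endpoint from the log; TLS verification is skipped here only to keep the example short, whereas a real client would trust the cluster CA from the kubeconfig:

    // healthz_check.go: a minimal sketch of an apiserver healthz probe.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.61:8443/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }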
	I0729 12:24:28.728316  164458 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 12:24:28.769930  164458 system_pods.go:59] 6 kube-system pods found
	I0729 12:24:28.769981  164458 system_pods.go:61] "coredns-7db6d8ff4d-dth8w" [9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3] Running
	I0729 12:24:28.769988  164458 system_pods.go:61] "etcd-pause-737279" [a3e7c2fb-1721-4e04-8b6a-5d56c739d7c1] Running
	I0729 12:24:28.769994  164458 system_pods.go:61] "kube-apiserver-pause-737279" [002c0476-a619-48b5-9ccc-418b59526917] Running
	I0729 12:24:28.769999  164458 system_pods.go:61] "kube-controller-manager-pause-737279" [93857787-6d50-490c-a76a-4362bd3e64a0] Running
	I0729 12:24:28.770003  164458 system_pods.go:61] "kube-proxy-g67j8" [3b82113b-7e33-4acd-80a9-21b0a7b91d13] Running
	I0729 12:24:28.770008  164458 system_pods.go:61] "kube-scheduler-pause-737279" [e133ec5f-b9ac-4223-be31-8723de7bb5b6] Running
	I0729 12:24:28.770018  164458 system_pods.go:74] duration metric: took 41.693183ms to wait for pod list to return data ...
	I0729 12:24:28.770027  164458 default_sa.go:34] waiting for default service account to be created ...
	I0729 12:24:28.967403  164458 default_sa.go:45] found service account: "default"
	I0729 12:24:28.967436  164458 default_sa.go:55] duration metric: took 197.401776ms for default service account to be created ...
	I0729 12:24:28.967450  164458 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 12:24:29.168903  164458 system_pods.go:86] 6 kube-system pods found
	I0729 12:24:29.168934  164458 system_pods.go:89] "coredns-7db6d8ff4d-dth8w" [9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3] Running
	I0729 12:24:29.168940  164458 system_pods.go:89] "etcd-pause-737279" [a3e7c2fb-1721-4e04-8b6a-5d56c739d7c1] Running
	I0729 12:24:29.168944  164458 system_pods.go:89] "kube-apiserver-pause-737279" [002c0476-a619-48b5-9ccc-418b59526917] Running
	I0729 12:24:29.168949  164458 system_pods.go:89] "kube-controller-manager-pause-737279" [93857787-6d50-490c-a76a-4362bd3e64a0] Running
	I0729 12:24:29.168953  164458 system_pods.go:89] "kube-proxy-g67j8" [3b82113b-7e33-4acd-80a9-21b0a7b91d13] Running
	I0729 12:24:29.168956  164458 system_pods.go:89] "kube-scheduler-pause-737279" [e133ec5f-b9ac-4223-be31-8723de7bb5b6] Running
	I0729 12:24:29.168987  164458 system_pods.go:126] duration metric: took 201.529279ms to wait for k8s-apps to be running ...
	I0729 12:24:29.168997  164458 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 12:24:29.169045  164458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:24:29.188861  164458 system_svc.go:56] duration metric: took 19.846205ms WaitForService to wait for kubelet
	I0729 12:24:29.188897  164458 kubeadm.go:582] duration metric: took 2.674210197s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 12:24:29.188925  164458 node_conditions.go:102] verifying NodePressure condition ...
	I0729 12:24:29.367267  164458 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 12:24:29.367300  164458 node_conditions.go:123] node cpu capacity is 2
	I0729 12:24:29.367317  164458 node_conditions.go:105] duration metric: took 178.385301ms to run NodePressure ...
	I0729 12:24:29.367333  164458 start.go:241] waiting for startup goroutines ...
	I0729 12:24:29.367341  164458 start.go:246] waiting for cluster config update ...
	I0729 12:24:29.367348  164458 start.go:255] writing updated cluster config ...
	I0729 12:24:29.485025  164458 ssh_runner.go:195] Run: rm -f paused
	I0729 12:24:29.537003  164458 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 12:24:29.659545  164458 out.go:177] * Done! kubectl is now configured to use "pause-737279" cluster and "default" namespace by default
	I0729 12:24:26.531017  164647 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.15312777s)
	I0729 12:24:26.531046  164647 crio.go:469] duration metric: took 2.153302942s to extract the tarball
	I0729 12:24:26.531056  164647 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 12:24:26.588623  164647 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 12:24:26.634356  164647 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 12:24:26.634381  164647 cache_images.go:84] Images are preloaded, skipping loading
	I0729 12:24:26.634393  164647 kubeadm.go:934] updating node { 192.168.50.36 8443 v1.31.0-beta.0 crio true true} ...
	I0729 12:24:26.634524  164647 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-714444 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-714444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 12:24:26.634619  164647 ssh_runner.go:195] Run: crio config
	I0729 12:24:26.697306  164647 cni.go:84] Creating CNI manager for ""
	I0729 12:24:26.697337  164647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 12:24:26.697349  164647 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 12:24:26.697381  164647 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.36 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-714444 NodeName:kubernetes-upgrade-714444 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs
/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 12:24:26.697594  164647 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-714444"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
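The kubeadm config rendered above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch that walks such a file and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3 is available; this is illustrative only, since minikube hands the file to kubeadm rather than parsing it itself:

    // kubeadm_config_walk.go: list apiVersion/kind of each YAML document.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                fmt.Println("decode error:", err)
                return
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }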
	I0729 12:24:26.697669  164647 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 12:24:26.710982  164647 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 12:24:26.711069  164647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 12:24:26.724013  164647 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I0729 12:24:26.745429  164647 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 12:24:26.767543  164647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
	I0729 12:24:26.790281  164647 ssh_runner.go:195] Run: grep 192.168.50.36	control-plane.minikube.internal$ /etc/hosts
	I0729 12:24:26.795213  164647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 12:24:26.811681  164647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 12:24:26.959468  164647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 12:24:26.979324  164647 certs.go:68] Setting up /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444 for IP: 192.168.50.36
	I0729 12:24:26.979352  164647 certs.go:194] generating shared ca certs ...
	I0729 12:24:26.979374  164647 certs.go:226] acquiring lock for ca certs: {Name:mk26186aa21329546c893ec8355e9e5f4d1d89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:24:26.979565  164647 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key
	I0729 12:24:26.979630  164647 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key
	I0729 12:24:26.979644  164647 certs.go:256] generating profile certs ...
	I0729 12:24:26.979751  164647 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/client.key
	I0729 12:24:26.979833  164647 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.key.24ba74ec
	I0729 12:24:26.979891  164647 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/proxy-client.key
	I0729 12:24:26.980039  164647 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem (1338 bytes)
	W0729 12:24:26.980078  164647 certs.go:480] ignoring /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963_empty.pem, impossibly tiny 0 bytes
	I0729 12:24:26.980091  164647 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 12:24:26.980121  164647 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/ca.pem (1082 bytes)
	I0729 12:24:26.980151  164647 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/cert.pem (1123 bytes)
	I0729 12:24:26.980184  164647 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/certs/key.pem (1675 bytes)
	I0729 12:24:26.980235  164647 certs.go:484] found cert: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem (1708 bytes)
	I0729 12:24:26.981185  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 12:24:27.028825  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 12:24:27.062557  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 12:24:27.098747  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 12:24:27.124697  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 12:24:27.152579  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 12:24:27.176210  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 12:24:27.200454  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 12:24:27.225122  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/ssl/certs/1209632.pem --> /usr/share/ca-certificates/1209632.pem (1708 bytes)
	I0729 12:24:27.249465  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 12:24:27.273921  164647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19336-113730/.minikube/certs/120963.pem --> /usr/share/ca-certificates/120963.pem (1338 bytes)
	I0729 12:24:27.298018  164647 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 12:24:27.314858  164647 ssh_runner.go:195] Run: openssl version
	I0729 12:24:27.321114  164647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 12:24:27.332057  164647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:24:27.336798  164647 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:46 /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:24:27.336875  164647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 12:24:27.342558  164647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 12:24:27.353007  164647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/120963.pem && ln -fs /usr/share/ca-certificates/120963.pem /etc/ssl/certs/120963.pem"
	I0729 12:24:27.363643  164647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/120963.pem
	I0729 12:24:27.368805  164647 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:26 /usr/share/ca-certificates/120963.pem
	I0729 12:24:27.368873  164647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/120963.pem
	I0729 12:24:27.374646  164647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/120963.pem /etc/ssl/certs/51391683.0"
	I0729 12:24:27.386713  164647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1209632.pem && ln -fs /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/1209632.pem"
	I0729 12:24:27.397846  164647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1209632.pem
	I0729 12:24:27.402340  164647 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:26 /usr/share/ca-certificates/1209632.pem
	I0729 12:24:27.402407  164647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1209632.pem
	I0729 12:24:27.408272  164647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1209632.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 12:24:27.420593  164647 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 12:24:27.424981  164647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 12:24:27.430735  164647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 12:24:27.436624  164647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 12:24:27.442727  164647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 12:24:27.448850  164647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 12:24:27.454561  164647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
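The "openssl x509 ... -checkend 86400" runs above ask whether each certificate expires within the next day. A minimal Go sketch of the same test using only the standard library, assuming one of the certificate paths from the log is readable by the caller:

    // cert_checkend.go: does the certificate expire within 86400 seconds?
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Println(err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println(err)
            return
        }
        if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 86400 seconds")
        } else {
            fmt.Println("certificate will not expire within 86400 seconds")
        }
    }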
	I0729 12:24:27.460439  164647 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-714444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-714444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:24:27.460558  164647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 12:24:27.460620  164647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:24:27.502913  164647 cri.go:89] found id: ""
	I0729 12:24:27.502982  164647 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 12:24:27.512914  164647 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 12:24:27.512936  164647 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 12:24:27.512998  164647 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 12:24:27.522231  164647 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 12:24:27.522953  164647 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-714444" does not appear in /home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 12:24:27.523337  164647 kubeconfig.go:62] /home/jenkins/minikube-integration/19336-113730/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-714444" cluster setting kubeconfig missing "kubernetes-upgrade-714444" context setting]
	I0729 12:24:27.523891  164647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/kubeconfig: {Name:mkb219e196dca6dd8aa7af14918c6562be58786a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 12:24:27.524699  164647 kapi.go:59] client config for kubernetes-upgrade-714444: &rest.Config{Host:"https://192.168.50.36:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/client.crt", KeyFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/profiles/kubernetes-upgrade-714444/client.key", CAFile:"/home/jenkins/minikube-integration/19336-113730/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(n
il), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 12:24:27.525414  164647 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 12:24:27.534651  164647 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.50.36
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-714444"
	   kubeletExtraArgs:
	     node-ip: 192.168.50.36
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta2
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	@@ -33,14 +33,12 @@
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.20.0
	+kubernetesVersion: v1.31.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	@@ -52,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: cgroupfs
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
	I0729 12:24:27.534669  164647 kubeadm.go:1160] stopping kube-system containers ...
	I0729 12:24:27.534684  164647 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 12:24:27.534751  164647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 12:24:27.570324  164647 cri.go:89] found id: ""
	I0729 12:24:27.570406  164647 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 12:24:27.586863  164647 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 12:24:27.596451  164647 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 12:24:27.596483  164647 kubeadm.go:157] found existing configuration files:
	
	I0729 12:24:27.596534  164647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 12:24:27.605323  164647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 12:24:27.605391  164647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 12:24:27.614590  164647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 12:24:27.623323  164647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 12:24:27.623406  164647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 12:24:27.632538  164647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 12:24:27.641399  164647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 12:24:27.641470  164647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 12:24:27.650830  164647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 12:24:27.659730  164647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 12:24:27.659812  164647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 12:24:27.669391  164647 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 12:24:27.679221  164647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:24:27.793049  164647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:24:28.994323  164647 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.201230542s)
	I0729 12:24:28.994359  164647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:24:29.222517  164647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:24:29.295244  164647 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 12:24:29.413268  164647 api_server.go:52] waiting for apiserver process to appear ...
	I0729 12:24:29.413383  164647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:24:29.914183  164647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:24:30.414260  164647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:24:30.914282  164647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:24:30.936304  164647 api_server.go:72] duration metric: took 1.523035986s to wait for apiserver process to appear ...
	I0729 12:24:30.936343  164647 api_server.go:88] waiting for apiserver healthz status ...
	I0729 12:24:30.936367  164647 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0729 12:24:30.936909  164647 api_server.go:269] stopped: https://192.168.50.36:8443/healthz: Get "https://192.168.50.36:8443/healthz": dial tcp 192.168.50.36:8443: connect: connection refused
	
	
	==> CRI-O <==
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.811789207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255872811751375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5803a7a0-5345-441e-86df-04604c174456 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.812485601Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c519fe94-4be9-4cbf-bb4f-f8b2e8d48bd2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.812592071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c519fe94-4be9-4cbf-bb4f-f8b2e8d48bd2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.812984740Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3e67f08a32fafc17f995bfb93ecb98ff9d78eab781d6850dfa89258f3706f8,PodSandboxId:4232586f82068f60f80e325678f9f0b117462b90cd75a680de98cc688e31fef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255852353122413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 192613f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655e3e84b38256666c14c450905c69a8ec9544c3316cfa727ce63abc5e377af8,PodSandboxId:397a3ff91ccbcea51fd43ded7fe2064f726d23282a6625d26bc8d4d3fa91e316,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722255852332363919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68aeb8d057e916a2b036f29edbeb0d6d79f5b06cc5fc4748f673150ca42a98fd,PodSandboxId:e28884b7839695abb5ebf05cbc23a7e7ced4d00c3c35dd1936ba9b59b3dd0110,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722255848529975191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2c
f3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bd3468de7b46b47012e67d8bea267f6ece234196038152eb0a59357c2b4c14,PodSandboxId:2a848534be433380de3dc31f9224327caf0bcd3e17e42bdb30f0a113a8828838,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722255848559296261,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451868e5e03df8e67d27e1b59d451a54af8899e8089704c8d8c0620a6d355f6,PodSandboxId:606796b77d121330b6e624b20ef4dc92f95373bce564774eb647a297cf02ef4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722255848504257220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9058e2cf19216cc93b8
bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f654097c9b250a2603be3f469cb8c13c7204eda6babbd1e026355b6afacf14a,PodSandboxId:134397e8bf97cb355838a9361bf5a8a177e9e08a7e72d6325af5f6d049f561b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722255848520801053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io
.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5,PodSandboxId:b0d3631d99534e1f7b6d6cc8809a0ea3cdf6c4548f30779de6e1fc98a2c51a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255843547171551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1926
13f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff,PodSandboxId:c151a83bebd5b199317d657f7e88b65dc0586a9f9a91671b9efb629971ea5fa5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722255842864477114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6,PodSandboxId:3498d24403dd019f4acbf27f697062648ce82bfdb8bc6841f0fa8ffe2f1ecca8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722255842695569039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011,PodSandboxId:cd2be879ffb11f44eed9e78a44b31098964bdb3862a35ff6384413e4125af617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255842652436420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-7372
79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2cf3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a,PodSandboxId:78582a6fe3ff09c325366150217ec902169bd821bb6e5c14d1fe9d130415e67a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255842813486433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7,PodSandboxId:9b98c403bbd519e766ac59262be7b47bbda7c81c4d55e2a1b0680c5ace39d8ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255842517169736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9058e2cf19216cc93b8bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c519fe94-4be9-4cbf-bb4f-f8b2e8d48bd2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.873468260Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=785b7519-543a-4623-b4e5-f9b4b2fbfd4c name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.873634091Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=785b7519-543a-4623-b4e5-f9b4b2fbfd4c name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.874919851Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48a8d76f-b51f-4794-aef4-3406006f380e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.875378357Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255872875345144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48a8d76f-b51f-4794-aef4-3406006f380e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.876121274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=031a49ef-7f0f-4bec-95af-7cc36125e726 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.876207155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=031a49ef-7f0f-4bec-95af-7cc36125e726 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.876559419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3e67f08a32fafc17f995bfb93ecb98ff9d78eab781d6850dfa89258f3706f8,PodSandboxId:4232586f82068f60f80e325678f9f0b117462b90cd75a680de98cc688e31fef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255852353122413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 192613f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655e3e84b38256666c14c450905c69a8ec9544c3316cfa727ce63abc5e377af8,PodSandboxId:397a3ff91ccbcea51fd43ded7fe2064f726d23282a6625d26bc8d4d3fa91e316,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722255852332363919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68aeb8d057e916a2b036f29edbeb0d6d79f5b06cc5fc4748f673150ca42a98fd,PodSandboxId:e28884b7839695abb5ebf05cbc23a7e7ced4d00c3c35dd1936ba9b59b3dd0110,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722255848529975191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2c
f3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bd3468de7b46b47012e67d8bea267f6ece234196038152eb0a59357c2b4c14,PodSandboxId:2a848534be433380de3dc31f9224327caf0bcd3e17e42bdb30f0a113a8828838,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722255848559296261,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451868e5e03df8e67d27e1b59d451a54af8899e8089704c8d8c0620a6d355f6,PodSandboxId:606796b77d121330b6e624b20ef4dc92f95373bce564774eb647a297cf02ef4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722255848504257220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9058e2cf19216cc93b8
bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f654097c9b250a2603be3f469cb8c13c7204eda6babbd1e026355b6afacf14a,PodSandboxId:134397e8bf97cb355838a9361bf5a8a177e9e08a7e72d6325af5f6d049f561b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722255848520801053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io
.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5,PodSandboxId:b0d3631d99534e1f7b6d6cc8809a0ea3cdf6c4548f30779de6e1fc98a2c51a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255843547171551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1926
13f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff,PodSandboxId:c151a83bebd5b199317d657f7e88b65dc0586a9f9a91671b9efb629971ea5fa5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722255842864477114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6,PodSandboxId:3498d24403dd019f4acbf27f697062648ce82bfdb8bc6841f0fa8ffe2f1ecca8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722255842695569039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011,PodSandboxId:cd2be879ffb11f44eed9e78a44b31098964bdb3862a35ff6384413e4125af617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255842652436420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-7372
79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2cf3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a,PodSandboxId:78582a6fe3ff09c325366150217ec902169bd821bb6e5c14d1fe9d130415e67a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255842813486433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7,PodSandboxId:9b98c403bbd519e766ac59262be7b47bbda7c81c4d55e2a1b0680c5ace39d8ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255842517169736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9058e2cf19216cc93b8bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=031a49ef-7f0f-4bec-95af-7cc36125e726 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.945474170Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1056bbcc-ec56-42a5-86dd-cc3b579a5eed name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.945581209Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1056bbcc-ec56-42a5-86dd-cc3b579a5eed name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.947282793Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=558bb57b-bb25-44f2-b645-2e119d4a13ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.947841668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255872947808994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=558bb57b-bb25-44f2-b645-2e119d4a13ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.948531788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2a658db-75ec-4af8-9eff-7cbc3695ec9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.948718534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2a658db-75ec-4af8-9eff-7cbc3695ec9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:32 pause-737279 crio[3001]: time="2024-07-29 12:24:32.949530968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3e67f08a32fafc17f995bfb93ecb98ff9d78eab781d6850dfa89258f3706f8,PodSandboxId:4232586f82068f60f80e325678f9f0b117462b90cd75a680de98cc688e31fef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255852353122413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 192613f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655e3e84b38256666c14c450905c69a8ec9544c3316cfa727ce63abc5e377af8,PodSandboxId:397a3ff91ccbcea51fd43ded7fe2064f726d23282a6625d26bc8d4d3fa91e316,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722255852332363919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68aeb8d057e916a2b036f29edbeb0d6d79f5b06cc5fc4748f673150ca42a98fd,PodSandboxId:e28884b7839695abb5ebf05cbc23a7e7ced4d00c3c35dd1936ba9b59b3dd0110,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722255848529975191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2c
f3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bd3468de7b46b47012e67d8bea267f6ece234196038152eb0a59357c2b4c14,PodSandboxId:2a848534be433380de3dc31f9224327caf0bcd3e17e42bdb30f0a113a8828838,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722255848559296261,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451868e5e03df8e67d27e1b59d451a54af8899e8089704c8d8c0620a6d355f6,PodSandboxId:606796b77d121330b6e624b20ef4dc92f95373bce564774eb647a297cf02ef4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722255848504257220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9058e2cf19216cc93b8
bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f654097c9b250a2603be3f469cb8c13c7204eda6babbd1e026355b6afacf14a,PodSandboxId:134397e8bf97cb355838a9361bf5a8a177e9e08a7e72d6325af5f6d049f561b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722255848520801053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io
.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5,PodSandboxId:b0d3631d99534e1f7b6d6cc8809a0ea3cdf6c4548f30779de6e1fc98a2c51a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255843547171551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1926
13f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff,PodSandboxId:c151a83bebd5b199317d657f7e88b65dc0586a9f9a91671b9efb629971ea5fa5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722255842864477114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6,PodSandboxId:3498d24403dd019f4acbf27f697062648ce82bfdb8bc6841f0fa8ffe2f1ecca8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722255842695569039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011,PodSandboxId:cd2be879ffb11f44eed9e78a44b31098964bdb3862a35ff6384413e4125af617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255842652436420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-7372
79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2cf3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a,PodSandboxId:78582a6fe3ff09c325366150217ec902169bd821bb6e5c14d1fe9d130415e67a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255842813486433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7,PodSandboxId:9b98c403bbd519e766ac59262be7b47bbda7c81c4d55e2a1b0680c5ace39d8ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255842517169736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9058e2cf19216cc93b8bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2a658db-75ec-4af8-9eff-7cbc3695ec9a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:33 pause-737279 crio[3001]: time="2024-07-29 12:24:33.007752659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4440425-79a1-43de-86a8-3b8e57926175 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:33 pause-737279 crio[3001]: time="2024-07-29 12:24:33.007885575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4440425-79a1-43de-86a8-3b8e57926175 name=/runtime.v1.RuntimeService/Version
	Jul 29 12:24:33 pause-737279 crio[3001]: time="2024-07-29 12:24:33.009302833Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0127f9f-8747-4520-99e5-472e9e89df98 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:33 pause-737279 crio[3001]: time="2024-07-29 12:24:33.009879733Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722255873009846776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0127f9f-8747-4520-99e5-472e9e89df98 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 12:24:33 pause-737279 crio[3001]: time="2024-07-29 12:24:33.010862142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5d630d7-efbe-4c83-b4bc-0d542bca9b12 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:33 pause-737279 crio[3001]: time="2024-07-29 12:24:33.010932339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5d630d7-efbe-4c83-b4bc-0d542bca9b12 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 12:24:33 pause-737279 crio[3001]: time="2024-07-29 12:24:33.011253446Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c3e67f08a32fafc17f995bfb93ecb98ff9d78eab781d6850dfa89258f3706f8,PodSandboxId:4232586f82068f60f80e325678f9f0b117462b90cd75a680de98cc688e31fef3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722255852353122413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 192613f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655e3e84b38256666c14c450905c69a8ec9544c3316cfa727ce63abc5e377af8,PodSandboxId:397a3ff91ccbcea51fd43ded7fe2064f726d23282a6625d26bc8d4d3fa91e316,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722255852332363919,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68aeb8d057e916a2b036f29edbeb0d6d79f5b06cc5fc4748f673150ca42a98fd,PodSandboxId:e28884b7839695abb5ebf05cbc23a7e7ced4d00c3c35dd1936ba9b59b3dd0110,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722255848529975191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2c
f3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bd3468de7b46b47012e67d8bea267f6ece234196038152eb0a59357c2b4c14,PodSandboxId:2a848534be433380de3dc31f9224327caf0bcd3e17e42bdb30f0a113a8828838,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722255848559296261,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451868e5e03df8e67d27e1b59d451a54af8899e8089704c8d8c0620a6d355f6,PodSandboxId:606796b77d121330b6e624b20ef4dc92f95373bce564774eb647a297cf02ef4a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722255848504257220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9058e2cf19216cc93b8
bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f654097c9b250a2603be3f469cb8c13c7204eda6babbd1e026355b6afacf14a,PodSandboxId:134397e8bf97cb355838a9361bf5a8a177e9e08a7e72d6325af5f6d049f561b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722255848520801053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io
.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5,PodSandboxId:b0d3631d99534e1f7b6d6cc8809a0ea3cdf6c4548f30779de6e1fc98a2c51a56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722255843547171551,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-dth8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1926
13f5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff,PodSandboxId:c151a83bebd5b199317d657f7e88b65dc0586a9f9a91671b9efb629971ea5fa5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722255842864477114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6c9fc035e4d3d5f7f8cb10013da83ec,},Annotations:map[string]string{io.kubernetes.container.hash: 6858543f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6,PodSandboxId:3498d24403dd019f4acbf27f697062648ce82bfdb8bc6841f0fa8ffe2f1ecca8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722255842695569039,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller
-manager-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4931a3f23911f239ad146962d8da987f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011,PodSandboxId:cd2be879ffb11f44eed9e78a44b31098964bdb3862a35ff6384413e4125af617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722255842652436420,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-7372
79,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54952ffe2cf3b2e04ff29ddef3e56753,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a,PodSandboxId:78582a6fe3ff09c325366150217ec902169bd821bb6e5c14d1fe9d130415e67a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722255842813486433,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g67j8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 3b82113b-7e33-4acd-80a9-21b0a7b91d13,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc99aaa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7,PodSandboxId:9b98c403bbd519e766ac59262be7b47bbda7c81c4d55e2a1b0680c5ace39d8ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722255842517169736,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-737279,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9058e2cf19216cc93b8bfafdb7797839,},Annotations:map[string]string{io.kubernetes.container.hash: 4a875e37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5d630d7-efbe-4c83-b4bc-0d542bca9b12 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4c3e67f08a32f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Running             coredns                   2                   4232586f82068       coredns-7db6d8ff4d-dth8w
	655e3e84b3825       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   20 seconds ago      Running             kube-proxy                2                   397a3ff91ccbc       kube-proxy-g67j8
	f9bd3468de7b4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   24 seconds ago      Running             kube-controller-manager   2                   2a848534be433       kube-controller-manager-pause-737279
	68aeb8d057e91       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   24 seconds ago      Running             kube-scheduler            2                   e28884b783969       kube-scheduler-pause-737279
	4f654097c9b25       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      2                   134397e8bf97c       etcd-pause-737279
	0451868e5e03d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   24 seconds ago      Running             kube-apiserver            2                   606796b77d121       kube-apiserver-pause-737279
	1678bac4a7262       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   29 seconds ago      Exited              coredns                   1                   b0d3631d99534       coredns-7db6d8ff4d-dth8w
	67f5680233239       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   30 seconds ago      Exited              etcd                      1                   c151a83bebd5b       etcd-pause-737279
	aafd1b2987099       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   30 seconds ago      Exited              kube-proxy                1                   78582a6fe3ff0       kube-proxy-g67j8
	4cc84da55d395       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   30 seconds ago      Exited              kube-controller-manager   1                   3498d24403dd0       kube-controller-manager-pause-737279
	b15bd5a323ff1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   30 seconds ago      Exited              kube-scheduler            1                   cd2be879ffb11       kube-scheduler-pause-737279
	0f72f27c65ddb       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   30 seconds ago      Exited              kube-apiserver            1                   9b98c403bbd51       kube-apiserver-pause-737279
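	
The table above shows the two restart generations on pause-737279: the Attempt-2 containers are Running while their Attempt-1 predecessors are Exited. As a rough sketch for reproducing this view against a still-running profile (the exact command form is an assumption, not taken from this run), the same listing can be pulled straight from CRI-O on the node:

	out/minikube-linux-amd64 -p pause-737279 ssh -- sudo crictl ps -a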
	
	
	==> coredns [1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5] <==
	
	
	==> coredns [4c3e67f08a32fafc17f995bfb93ecb98ff9d78eab781d6850dfa89258f3706f8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35482 - 22485 "HINFO IN 6284925747029914226.1876960486442796283. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019325941s
	
	
	==> describe nodes <==
	Name:               pause-737279
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-737279
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b151275a940c006388f4657ef7f817469a6a9a53
	                    minikube.k8s.io/name=pause-737279
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T12_23_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 12:23:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-737279
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 12:24:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 12:24:11 +0000   Mon, 29 Jul 2024 12:23:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 12:24:11 +0000   Mon, 29 Jul 2024 12:23:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 12:24:11 +0000   Mon, 29 Jul 2024 12:23:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 12:24:11 +0000   Mon, 29 Jul 2024 12:23:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.61
	  Hostname:    pause-737279
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d93ed0c3c4842eab9e127a82053f32c
	  System UUID:                9d93ed0c-3c48-42ea-b9e1-27a82053f32c
	  Boot ID:                    18bba4ef-6047-4833-91c3-04c72a396939
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-dth8w                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     68s
	  kube-system                 etcd-pause-737279                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         83s
	  kube-system                 kube-apiserver-pause-737279             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-737279    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-g67j8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-pause-737279             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 66s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientPID     82s                kubelet          Node pause-737279 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  82s                kubelet          Node pause-737279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s                kubelet          Node pause-737279 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeReady                81s                kubelet          Node pause-737279 status is now: NodeReady
	  Normal  RegisteredNode           69s                node-controller  Node pause-737279 event: Registered Node pause-737279 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-737279 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-737279 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-737279 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-737279 event: Registered Node pause-737279 in Controller
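	
The node reports Ready with all pressure conditions False, and the event log records the second kubelet start (25s ago) followed by a fresh RegisteredNode event. A comparable snapshot can be regenerated while the profile is still up (the command form below is an assumption, not taken from this run):

	out/minikube-linux-amd64 -p pause-737279 kubectl -- describe node pause-737279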
	
	
	==> dmesg <==
	[  +0.058371] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060797] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.164057] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.135371] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.262481] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.526431] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.059427] kauditd_printk_skb: 130 callbacks suppressed
	[Jul29 12:23] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.064478] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.310953] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.078637] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.881201] systemd-fstab-generator[1507]: Ignoring "noauto" option for root device
	[  +0.179402] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.035259] kauditd_printk_skb: 84 callbacks suppressed
	[Jul29 12:24] systemd-fstab-generator[2354]: Ignoring "noauto" option for root device
	[  +0.096196] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.091603] systemd-fstab-generator[2366]: Ignoring "noauto" option for root device
	[  +0.463260] systemd-fstab-generator[2531]: Ignoring "noauto" option for root device
	[  +0.299157] systemd-fstab-generator[2647]: Ignoring "noauto" option for root device
	[  +0.614018] systemd-fstab-generator[2857]: Ignoring "noauto" option for root device
	[  +1.650287] systemd-fstab-generator[3472]: Ignoring "noauto" option for root device
	[  +2.697962] systemd-fstab-generator[3595]: Ignoring "noauto" option for root device
	[  +0.085833] kauditd_printk_skb: 244 callbacks suppressed
	[ +15.999198] kauditd_printk_skb: 50 callbacks suppressed
	[  +2.681981] systemd-fstab-generator[4039]: Ignoring "noauto" option for root device
	
	
	==> etcd [4f654097c9b250a2603be3f469cb8c13c7204eda6babbd1e026355b6afacf14a] <==
	{"level":"info","ts":"2024-07-29T12:24:08.900993Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:24:08.901043Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:24:08.921373Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T12:24:08.92392Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"be6e2cf5fb13c","initial-advertise-peer-urls":["https://192.168.39.61:2380"],"listen-peer-urls":["https://192.168.39.61:2380"],"advertise-client-urls":["https://192.168.39.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T12:24:08.926739Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T12:24:08.921856Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2024-07-29T12:24:08.92744Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.61:2380"}
	{"level":"info","ts":"2024-07-29T12:24:10.060534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T12:24:10.060773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T12:24:10.06085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgPreVoteResp from be6e2cf5fb13c at term 2"}
	{"level":"info","ts":"2024-07-29T12:24:10.060929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T12:24:10.060968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c received MsgVoteResp from be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2024-07-29T12:24:10.061012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became leader at term 3"}
	{"level":"info","ts":"2024-07-29T12:24:10.061046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be6e2cf5fb13c elected leader be6e2cf5fb13c at term 3"}
	{"level":"info","ts":"2024-07-29T12:24:10.06631Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"be6e2cf5fb13c","local-member-attributes":"{Name:pause-737279 ClientURLs:[https://192.168.39.61:2379]}","request-path":"/0/members/be6e2cf5fb13c/attributes","cluster-id":"855213fb0218a9ad","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T12:24:10.066433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:24:10.066924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T12:24:10.066962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T12:24:10.066551Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T12:24:10.069332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.61:2379"}
	{"level":"info","ts":"2024-07-29T12:24:10.070806Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-29T12:24:28.688546Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.843351ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12771242065836481876 > lease_revoke:<id:313c90fe72b59c1d>","response":"size:27"}
	{"level":"warn","ts":"2024-07-29T12:24:28.689038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.583373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-737279\" ","response":"range_response_count:1 size:5426"}
	{"level":"info","ts":"2024-07-29T12:24:28.688814Z","caller":"traceutil/trace.go:171","msg":"trace[1073697496] linearizableReadLoop","detail":"{readStateIndex:509; appliedIndex:508; }","duration":"134.369636ms","start":"2024-07-29T12:24:28.554428Z","end":"2024-07-29T12:24:28.688798Z","steps":["trace[1073697496] 'read index received'  (duration: 25.231µs)","trace[1073697496] 'applied index is now lower than readState.Index'  (duration: 134.342828ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T12:24:28.689112Z","caller":"traceutil/trace.go:171","msg":"trace[1031003297] range","detail":"{range_begin:/registry/minions/pause-737279; range_end:; response_count:1; response_revision:469; }","duration":"134.6959ms","start":"2024-07-29T12:24:28.554401Z","end":"2024-07-29T12:24:28.689097Z","steps":["trace[1031003297] 'agreement among raft nodes before linearized reading'  (duration: 134.549339ms)"],"step_count":1}
	
	
	==> etcd [67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff] <==
	{"level":"warn","ts":"2024-07-29T12:24:03.402636Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-29T12:24:03.402737Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.61:2380"]}
	{"level":"info","ts":"2024-07-29T12:24:03.402861Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T12:24:03.404968Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.61:2379"]}
	{"level":"info","ts":"2024-07-29T12:24:03.405553Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-737279","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.61:2380"],"listen-peer-urls":["https://192.168.39.61:2380"],"advertise-client-urls":["https://192.168.39.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluste
r-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-07-29T12:24:03.444477Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"34.052861ms"}
	{"level":"info","ts":"2024-07-29T12:24:03.481335Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-29T12:24:03.490123Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","commit-index":413}
	{"level":"info","ts":"2024-07-29T12:24:03.492249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-29T12:24:03.492365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c became follower at term 2"}
	{"level":"info","ts":"2024-07-29T12:24:03.492396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft be6e2cf5fb13c [peers: [], term: 2, commit: 413, applied: 0, lastindex: 413, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-29T12:24:03.497428Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-29T12:24:03.515638Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":394}
	{"level":"info","ts":"2024-07-29T12:24:03.539514Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-29T12:24:03.553934Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"be6e2cf5fb13c","timeout":"7s"}
	{"level":"info","ts":"2024-07-29T12:24:03.572403Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"be6e2cf5fb13c"}
	{"level":"info","ts":"2024-07-29T12:24:03.572525Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"be6e2cf5fb13c","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T12:24:03.573067Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T12:24:03.573219Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T12:24:03.573256Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T12:24:03.573263Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T12:24:03.573478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be6e2cf5fb13c switched to configuration voters=(3350086559969596)"}
	{"level":"info","ts":"2024-07-29T12:24:03.573528Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","added-peer-id":"be6e2cf5fb13c","added-peer-peer-urls":["https://192.168.39.61:2380"]}
	{"level":"info","ts":"2024-07-29T12:24:03.573642Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"855213fb0218a9ad","local-member-id":"be6e2cf5fb13c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T12:24:03.573665Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> kernel <==
	 12:24:33 up 1 min,  0 users,  load average: 1.05, 0.37, 0.13
	Linux pause-737279 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0451868e5e03df8e67d27e1b59d451a54af8899e8089704c8d8c0620a6d355f6] <==
	I0729 12:24:11.540814       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 12:24:11.542818       1 aggregator.go:165] initial CRD sync complete...
	I0729 12:24:11.542920       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 12:24:11.542963       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 12:24:11.569914       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 12:24:11.570320       1 policy_source.go:224] refreshing policies
	I0729 12:24:11.591084       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 12:24:11.591783       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 12:24:11.596817       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 12:24:11.596855       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 12:24:11.599572       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 12:24:11.599645       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 12:24:11.607969       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0729 12:24:11.616375       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 12:24:11.643975       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 12:24:11.648307       1 cache.go:39] Caches are synced for autoregister controller
	I0729 12:24:11.667450       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 12:24:12.501177       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 12:24:13.287551       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 12:24:13.309894       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 12:24:13.367271       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 12:24:13.402991       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 12:24:13.410480       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 12:24:23.855212       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 12:24:23.956232       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7] <==
	I0729 12:24:03.052109       1 options.go:221] external host was not specified, using 192.168.39.61
	I0729 12:24:03.058408       1 server.go:148] Version: v1.30.3
	I0729 12:24:03.058480       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6] <==
	
	
	==> kube-controller-manager [f9bd3468de7b46b47012e67d8bea267f6ece234196038152eb0a59357c2b4c14] <==
	I0729 12:24:23.882407       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 12:24:23.886829       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 12:24:23.892571       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 12:24:23.896451       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 12:24:23.905324       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 12:24:23.905627       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.512µs"
	I0729 12:24:23.908797       1 shared_informer.go:320] Caches are synced for service account
	I0729 12:24:23.908818       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 12:24:23.908925       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0729 12:24:23.909008       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 12:24:23.911459       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 12:24:23.912722       1 shared_informer.go:320] Caches are synced for taint
	I0729 12:24:23.912851       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 12:24:23.912958       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-737279"
	I0729 12:24:23.913022       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 12:24:23.916004       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 12:24:23.930435       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 12:24:23.937555       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 12:24:23.954883       1 shared_informer.go:320] Caches are synced for deployment
	I0729 12:24:23.992936       1 shared_informer.go:320] Caches are synced for disruption
	I0729 12:24:24.096222       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:24:24.105861       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 12:24:24.533489       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:24:24.536884       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 12:24:24.536935       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [655e3e84b38256666c14c450905c69a8ec9544c3316cfa727ce63abc5e377af8] <==
	I0729 12:24:12.574324       1 server_linux.go:69] "Using iptables proxy"
	I0729 12:24:12.599213       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.61"]
	I0729 12:24:12.649937       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 12:24:12.650073       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 12:24:12.650150       1 server_linux.go:165] "Using iptables Proxier"
	I0729 12:24:12.654337       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 12:24:12.654827       1 server.go:872] "Version info" version="v1.30.3"
	I0729 12:24:12.655206       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:24:12.656973       1 config.go:192] "Starting service config controller"
	I0729 12:24:12.657486       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 12:24:12.657571       1 config.go:101] "Starting endpoint slice config controller"
	I0729 12:24:12.657602       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 12:24:12.658143       1 config.go:319] "Starting node config controller"
	I0729 12:24:12.658180       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 12:24:12.758801       1 shared_informer.go:320] Caches are synced for node config
	I0729 12:24:12.758866       1 shared_informer.go:320] Caches are synced for service config
	I0729 12:24:12.758891       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a] <==
	
	
	==> kube-scheduler [68aeb8d057e916a2b036f29edbeb0d6d79f5b06cc5fc4748f673150ca42a98fd] <==
	I0729 12:24:09.456822       1 serving.go:380] Generated self-signed cert in-memory
	W0729 12:24:11.550102       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 12:24:11.550206       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 12:24:11.550221       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 12:24:11.550229       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 12:24:11.579270       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 12:24:11.579332       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 12:24:11.589635       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 12:24:11.602858       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 12:24:11.602969       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 12:24:11.603029       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 12:24:11.703596       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011] <==
	
	
	==> kubelet <==
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.222900    3602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9058e2cf19216cc93b8bfafdb7797839-usr-share-ca-certificates\") pod \"kube-apiserver-pause-737279\" (UID: \"9058e2cf19216cc93b8bfafdb7797839\") " pod="kube-system/kube-apiserver-pause-737279"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: E0729 12:24:08.230189    3602 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-737279?timeout=10s\": dial tcp 192.168.39.61:8443: connect: connection refused" interval="400ms"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.323273    3602 kubelet_node_status.go:73] "Attempting to register node" node="pause-737279"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: E0729 12:24:08.325448    3602 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.61:8443: connect: connection refused" node="pause-737279"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.487861    3602 scope.go:117] "RemoveContainer" containerID="67f56802332398c5be8c2d6de8bbdc4ad1f4b05013c70e3d597a14e47d3600ff"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.490053    3602 scope.go:117] "RemoveContainer" containerID="0f72f27c65ddbb172ba8f36bc210278d065869f8fe318d9c85bb238e7dd24bc7"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.491644    3602 scope.go:117] "RemoveContainer" containerID="4cc84da55d39594de2e49d914a1c65bc1d01d41a921cd03990cce33c1963ffa6"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.493476    3602 scope.go:117] "RemoveContainer" containerID="b15bd5a323ff1f38063711110c759271da42060ef9ccc308d09ebaab04bad011"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: E0729 12:24:08.631259    3602 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-737279?timeout=10s\": dial tcp 192.168.39.61:8443: connect: connection refused" interval="800ms"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: I0729 12:24:08.728479    3602 kubelet_node_status.go:73] "Attempting to register node" node="pause-737279"
	Jul 29 12:24:08 pause-737279 kubelet[3602]: E0729 12:24:08.729366    3602 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.61:8443: connect: connection refused" node="pause-737279"
	Jul 29 12:24:09 pause-737279 kubelet[3602]: I0729 12:24:09.531419    3602 kubelet_node_status.go:73] "Attempting to register node" node="pause-737279"
	Jul 29 12:24:11 pause-737279 kubelet[3602]: I0729 12:24:11.649613    3602 kubelet_node_status.go:112] "Node was previously registered" node="pause-737279"
	Jul 29 12:24:11 pause-737279 kubelet[3602]: I0729 12:24:11.649744    3602 kubelet_node_status.go:76] "Successfully registered node" node="pause-737279"
	Jul 29 12:24:11 pause-737279 kubelet[3602]: I0729 12:24:11.651321    3602 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 12:24:11 pause-737279 kubelet[3602]: I0729 12:24:11.652618    3602 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.001746    3602 apiserver.go:52] "Watching apiserver"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.006304    3602 topology_manager.go:215] "Topology Admit Handler" podUID="3b82113b-7e33-4acd-80a9-21b0a7b91d13" podNamespace="kube-system" podName="kube-proxy-g67j8"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.008414    3602 topology_manager.go:215] "Topology Admit Handler" podUID="9ab70fb6-1e3d-4624-8b9f-fab998fc1cc3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dth8w"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.017070    3602 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.060145    3602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b82113b-7e33-4acd-80a9-21b0a7b91d13-lib-modules\") pod \"kube-proxy-g67j8\" (UID: \"3b82113b-7e33-4acd-80a9-21b0a7b91d13\") " pod="kube-system/kube-proxy-g67j8"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.060332    3602 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b82113b-7e33-4acd-80a9-21b0a7b91d13-xtables-lock\") pod \"kube-proxy-g67j8\" (UID: \"3b82113b-7e33-4acd-80a9-21b0a7b91d13\") " pod="kube-system/kube-proxy-g67j8"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.311008    3602 scope.go:117] "RemoveContainer" containerID="aafd1b298709944185d6e326e15e50ab8a453e066b90910822fc1907a612758a"
	Jul 29 12:24:12 pause-737279 kubelet[3602]: I0729 12:24:12.316979    3602 scope.go:117] "RemoveContainer" containerID="1678bac4a7262731d5272b7154d88310fccf52f537be0a1c46d69868cc5fc9f5"
	Jul 29 12:24:14 pause-737279 kubelet[3602]: I0729 12:24:14.665938    3602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-737279 -n pause-737279
helpers_test.go:261: (dbg) Run:  kubectl --context pause-737279 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (55.49s)
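For reference, the two post-mortem checks the harness ran above (helpers_test.go:254 and helpers_test.go:261) can be repeated by hand; a minimal sketch, assuming the pause-737279 profile and kubeconfig context from this run are still present on the Jenkins host:

	# report only the API server field of the profile's status
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-737279 -n pause-737279
	# print the names of any pods, across all namespaces, that are not in the Running phase
	kubectl --context pause-737279 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running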

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.054s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
E0729 12:44:27.393820  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.247:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.247:8443: connect: connection refused
helpers_test.go:329: [the preceding WARNING was emitted 75 more times, verbatim, while the API server at 192.168.39.247:8443 remained unreachable]
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (21m8s)
	TestStartStop (23m14s)
	TestStartStop/group/default-k8s-diff-port (18m20s)
	TestStartStop/group/default-k8s-diff-port/serial (18m20s)
	TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6m28s)
	TestStartStop/group/embed-certs (20m30s)
	TestStartStop/group/embed-certs/serial (20m30s)
	TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6m9s)
	TestStartStop/group/no-preload (20m34s)
	TestStartStop/group/no-preload/serial (20m34s)
	TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5m4s)
	TestStartStop/group/old-k8s-version (20m55s)
	TestStartStop/group/old-k8s-version/serial (20m55s)
	TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (2m39s)

                                                
                                                
goroutine 4075 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00072cd00, 0xc00096fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000012900, {0x49d1120, 0x2b, 0x2b}, {0x26b5f62?, 0xc000175b00?, 0x4a8da60?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00090f860)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00090f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00066ef00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2156 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0008688c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0009a8d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0009a8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0009a8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0009a8d00, 0xc0016d4100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1977
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 32 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 31
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 450 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 353
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2406 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001a65ba0, {0x268737b?, 0x60400000004?}, 0xc00066e000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001a65ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001a65ba0, 0xc00070d580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1958
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2709 [IO wait]:
internal/poll.runtime_pollWait(0x7f143c3a1c78, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00070d380?, 0xc001694800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00070d380, {0xc001694800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc00070d380, {0xc001694800?, 0x7f142c6c1e50?, 0xc001406b40?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001706068, {0xc001694800?, 0xc0012cd938?, 0x41469b?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001406b40, {0xc001694800?, 0x0?, 0xc001406b40?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0001bb7b0, {0x3696120, 0xc001406b40})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0001bb508, {0x3695500, 0xc001706068}, 0xc0012cd980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0001bb508, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0001bb508, {0xc001722000, 0x1000, 0xc001d97180?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc000af41e0, {0xc001aa2580, 0x9, 0x498cc30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3694600, 0xc000af41e0}, {0xc001aa2580, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001aa2580, 0x9, 0x12cddc0?}, {0x3694600?, 0xc000af41e0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001aa2540)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0012cdfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0014ec780)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2708
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 1778 [chan receive, 21 minutes]:
testing.(*T).Run(0xc0009a8340, {0x265b5c9?, 0x55127c?}, 0xc002242198)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0009a8340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0009a8340, 0x313a0e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2100 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0008688c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b50d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b50d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b50d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b50d00, 0xc0001c5f00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1977
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2098 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0008688c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b509c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b509c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b509c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b509c0, 0xc0001c5d80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1977
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 352 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0007cac90, 0x23)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0019482a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007cad00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00087e490, {0x3695980, 0xc00014bb90}, 0x1, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00087e490, 0x3b9aca00, 0x0, 0x1, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 446
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 246 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7f143c3a2150, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001c5e00)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0001c5e00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000992cc0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000992cc0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00087c0f0, {0x36ac860, 0xc000992cc0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc00087c0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00072d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 243
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 1959 [chan receive, 23 minutes]:
testing.(*testContext).waitParallel(0xc0008688c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006996c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006996c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0006996c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0006996c0, 0xc0006d1a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1957
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 353 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9a00, 0xc000060420}, 0xc0014de750, 0xc00169cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9a00, 0xc000060420}, 0x20?, 0xc0014de750, 0xc0014de798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9a00?, 0xc000060420?}, 0x70616b2032343831?, 0x5d36393a6f672e69?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc00134ac00?, 0xc001372a20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 446
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2440 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0013ac4c0, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2414
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2531 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9a00, 0xc000060420}, 0xc000506750, 0xc0016eff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9a00, 0xc000060420}, 0x80?, 0xc000506750, 0xc000506798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9a00?, 0xc000060420?}, 0xc001a644e0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc00134ac00?, 0xc0019df080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2511
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 734 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc001633200, 0xc000153bc0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 733
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2396 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9a00, 0xc000060420}, 0xc00050b750, 0xc001699f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9a00, 0xc000060420}, 0x8?, 0xc00050b750, 0xc00050b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9a00?, 0xc000060420?}, 0xc001a64680?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00050b7d0?, 0x592e44?, 0xc0017fe080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2440
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2511 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007cbb00, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2506
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 697 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc0013f4a80, 0xc0001521e0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 332
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2155 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0008688c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0009a8680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0009a8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0009a8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0009a8680, 0xc0016d4080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1977
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 624 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc001bb8c00, 0xc0019df020)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 623
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 1958 [chan receive, 21 minutes]:
testing.(*T).Run(0xc000698ea0, {0x265cb74?, 0x0?}, 0xc00070d580)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000698ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000698ea0, 0xc0006d19c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1957
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 442 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc00134b200, 0xc001372f60)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 441
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 445 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0019483c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 444
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 446 [chan receive, 77 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007cad00, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 444
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1802 [chan receive, 23 minutes]:
testing.(*T).Run(0xc0009a9380, {0x265b5c9?, 0x551133?}, 0x313a300)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0009a9380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0009a9380, 0x313a128)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 815 [select, 76 minutes]:
net/http.(*persistConn).readLoop(0xc0015f37a0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 822
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 2606 [IO wait]:
internal/poll.runtime_pollWait(0x7f143c3a2438, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00070d000?, 0xc000973000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00070d000, {0xc000973000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc00070d000, {0xc000973000?, 0xc000718c80?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001706070, {0xc000973000?, 0xc00097305f?, 0x70?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001406ae0, {0xc000973000?, 0x0?, 0xc001406ae0?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0001bb430, {0x3696120, 0xc001406ae0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0001bb188, {0x7f142c72ce98, 0xc001e3e1b0}, 0xc00169b980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0001bb188, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0001bb188, {0xc00069f000, 0x1000, 0xc001d97180?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00098df20, {0xc00072a820, 0x9, 0x498cc30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3694600, 0xc00098df20}, {0xc00072a820, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00072a820, 0x9, 0x169bdc0?}, {0x3694600?, 0xc00098df20?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00072a7e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00169bfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0014ec180)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2605
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 816 [select, 76 minutes]:
net/http.(*persistConn).writeLoop(0xc0015f37a0)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 822
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 2099 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0008688c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b50b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b50b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b50b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b50b60, 0xc0001c5e80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1977
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2219 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0016a21a0, {0x268737b?, 0x60400000004?}, 0xc00066fe00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0016a21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0016a21a0, 0xc00066e900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1961
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3284 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0014c8740, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3282
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1977 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001b501a0, 0xc002242198)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1778
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2499 [chan receive, 8 minutes]:
testing.(*T).Run(0xc001a64b60, {0x268737b?, 0x60400000004?}, 0xc00070c000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001a64b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001a64b60, 0xc00066e600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1960
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1961 [chan receive, 21 minutes]:
testing.(*T).Run(0xc000699a00, {0x265cb74?, 0x0?}, 0xc00066e900)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000699a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000699a00, 0xc0006d1a80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1957
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3278 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3277
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2101 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0008688c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b51040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b51040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b51040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b51040, 0xc0007c4700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1977
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1960 [chan receive, 18 minutes]:
testing.(*T).Run(0xc000699860, {0x265cb74?, 0x0?}, 0xc00066e600)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000699860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000699860, 0xc0006d1a40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1957
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1957 [chan receive, 23 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000698d00, 0x313a300)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1802
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3283 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0016dd260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3282
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1978 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0008688c0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b50340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b50340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b50340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b50340, 0xc0006c8d00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1977
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2422 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0016a24e0, {0x268737b?, 0x60400000004?}, 0xc00070c300)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0016a24e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0016a24e0, 0xc00066e200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1963
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2397 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2396
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2852 [IO wait]:
internal/poll.runtime_pollWait(0x7f142c7ad510, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0015e8b80?, 0xc001695000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0015e8b80, {0xc001695000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0015e8b80, {0xc001695000?, 0xc00150f180?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0017063d8, {0xc001695000?, 0xc00169505f?, 0x6f?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001406ba0, {0xc001695000?, 0x0?, 0xc001406ba0?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc00143e630, {0x3696120, 0xc001406ba0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00143e388, {0x7f142c72ce98, 0xc001e3f740}, 0xc00133b980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00143e388, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc00143e388, {0xc001703000, 0x1000, 0xc001d97180?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00014d500, {0xc001aa2820, 0x9, 0x498cc30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3694600, 0xc00014d500}, {0xc001aa2820, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001aa2820, 0x9, 0x133bdc0?}, {0x3694600?, 0xc00014d500?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001aa27e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00133bfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001524900)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2851
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 1963 [chan receive, 20 minutes]:
testing.(*T).Run(0xc000699d40, {0x265cb74?, 0x0?}, 0xc00066e200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000699d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000699d40, 0xc0006d1b80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1957
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2395 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0013ac490, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0017d4e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0013ac4c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009281b0, {0x3695980, 0xc001d66180}, 0x1, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009281b0, 0x3b9aca00, 0x0, 0x1, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2440
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2439 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0017d4f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2414
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2690 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b9840, 0xc000539dc0}, {0x36acf20, 0xc0019ba6c0}, 0x1, 0x0, 0xc001d6fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b9840?, 0xc00046a070?}, 0x3b9aca00, 0xc001347e10?, 0x1, 0xc001347c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b9840, 0xc00046a070}, 0xc00029c820, {0xc0019960c0, 0x12}, {0x26815ff, 0x14}, {0x26991bf, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36b9840, 0xc00046a070}, 0xc00029c820, {0xc0019960c0, 0x12}, {0x26689a1?, 0xc00050b760?}, {0x551133?, 0x4a170f?}, {0xc000287b00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00029c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00029c820, 0xc00070c300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2422
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2816 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b9840, 0xc0004b8e00}, {0x36acf20, 0xc00168e4c0}, 0x1, 0x0, 0xc001d6bc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b9840?, 0xc0007c8f50?}, 0x3b9aca00, 0xc0015d1e10?, 0x1, 0xc0015d1c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b9840, 0xc0007c8f50}, 0xc0016a2000, {0xc0019725d0, 0x11}, {0x26815ff, 0x14}, {0x26991bf, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36b9840, 0xc0007c8f50}, 0xc0016a2000, {0xc0019725d0, 0x11}, {0x2666793?, 0xc0016bdf60?}, {0x551133?, 0x4a170f?}, {0xc000287c00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0016a2000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0016a2000, 0xc00066fe00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2219
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3282 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b9840, 0xc0001990a0}, {0x36acf20, 0xc000533840}, 0x1, 0x0, 0xc0012dfc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b9840?, 0xc0004dd2d0?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b9840, 0xc0004dd2d0}, 0xc00029cb60, {0xc001a15f80, 0x16}, {0x26815ff, 0x14}, {0x26991bf, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36b9840, 0xc0004dd2d0}, 0xc00029cb60, {0xc001a15f80, 0x16}, {0x267293d?, 0xc0016bdf60?}, {0x551133?, 0x4a170f?}, {0xc001be8600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00029cb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00029cb60, 0xc00066e000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2406
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3277 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9a00, 0xc000060420}, 0xc000099f50, 0xc000099f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9a00, 0xc000060420}, 0xa0?, 0xc000099f50, 0xc000099f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9a00?, 0xc000060420?}, 0xc001a65d40?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000099fd0?, 0x592e44?, 0xc001d068a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3284
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2532 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2531
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2665 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b9840, 0xc0007c9880}, {0x36acf20, 0xc001a09320}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b9840?, 0xc000538150?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b9840, 0xc000538150}, 0xc00029c1a0, {0xc001395000, 0x1c}, {0x26815ff, 0x14}, {0x26991bf, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36b9840, 0xc000538150}, 0xc00029c1a0, {0xc001395000, 0x1c}, {0x26844f9?, 0xc00050b760?}, {0x551133?, 0x4a170f?}, {0xc001626600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00029c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00029c1a0, 0xc00070c000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2499
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2510 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000af5c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2506
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2530 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0007cbad0, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000af59e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007cbb00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0020e1180, {0x3695980, 0xc001ba06f0}, 0x1, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0020e1180, 0x3b9aca00, 0x0, 0x1, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2511
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3276 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0014c8710, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0016dd140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0014c8740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006ae9c0, {0x3695980, 0xc00014a000}, 0x1, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006ae9c0, 0x3b9aca00, 0x0, 0x1, 0xc000060420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3284
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef
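
For context on the blocked goroutines above: the "[select]"-state goroutines (e.g. 2690, 2816, 3282) are parked inside apimachinery's polling loop, which PodWait in helpers_test.go drives via wait.PollUntilContextTimeout until the pod condition is met or the timeout expires. The following is only a minimal sketch of that polling pattern; the condition function and the interval/timeout values are hypothetical stand-ins for illustration, not minikube's actual PodWait check.

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	// podIsReady is a hypothetical stand-in for the check a helper like PodWait runs on each tick.
	func podIsReady(ctx context.Context) (bool, error) {
		// Return (false, nil) to keep polling, (true, nil) to stop successfully,
		// or a non-nil error to abort the poll immediately.
		return true, nil
	}

	func main() {
		ctx := context.Background()
		// Illustrative values: poll every second for up to two minutes,
		// evaluating the condition immediately on entry.
		err := wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true, podIsReady)
		if err != nil {
			fmt.Println("condition not met before timeout:", err)
		}
	}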

                                                
                                    

Test pass (175/216)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 9.34
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 5.16
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 4.34
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.56
31 TestOffline 122.02
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
37 TestCertOptions 55.65
38 TestCertExpiration 291.26
40 TestForceSystemdFlag 54.01
41 TestForceSystemdEnv 43.82
43 TestKVMDriverInstallOrUpdate 3.66
47 TestErrorSpam/setup 38.47
48 TestErrorSpam/start 0.37
49 TestErrorSpam/status 0.74
50 TestErrorSpam/pause 1.52
51 TestErrorSpam/unpause 1.49
52 TestErrorSpam/stop 4.77
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 67.65
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 37.98
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.84
64 TestFunctional/serial/CacheCmd/cache/add_local 1.97
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.05
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
69 TestFunctional/serial/CacheCmd/cache/delete 0.1
70 TestFunctional/serial/MinikubeKubectlCmd 0.11
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 32.51
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.35
75 TestFunctional/serial/LogsFileCmd 1.38
76 TestFunctional/serial/InvalidService 4.34
78 TestFunctional/parallel/ConfigCmd 0.36
79 TestFunctional/parallel/DashboardCmd 16.81
80 TestFunctional/parallel/DryRun 0.43
81 TestFunctional/parallel/InternationalLanguage 0.14
82 TestFunctional/parallel/StatusCmd 0.76
86 TestFunctional/parallel/ServiceCmdConnect 17.66
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 43.06
90 TestFunctional/parallel/SSHCmd 0.42
91 TestFunctional/parallel/CpCmd 1.27
92 TestFunctional/parallel/MySQL 27.41
93 TestFunctional/parallel/FileSync 0.23
94 TestFunctional/parallel/CertSync 1.27
98 TestFunctional/parallel/NodeLabels 0.08
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
102 TestFunctional/parallel/License 0.22
103 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
110 TestFunctional/parallel/ImageCommands/ImageBuild 3.82
111 TestFunctional/parallel/ImageCommands/Setup 1.55
112 TestFunctional/parallel/ProfileCmd/profile_list 0.34
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.25
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.33
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.73
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.56
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.62
124 TestFunctional/parallel/ImageCommands/ImageRemove 1.38
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.37
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 7.6
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
133 TestFunctional/parallel/MountCmd/any-port 18.63
134 TestFunctional/parallel/MountCmd/specific-port 1.99
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.39
136 TestFunctional/parallel/Version/short 0.05
137 TestFunctional/parallel/Version/components 0.43
138 TestFunctional/parallel/ServiceCmd/DeployApp 12.18
139 TestFunctional/parallel/ServiceCmd/List 1.63
140 TestFunctional/parallel/ServiceCmd/JSONOutput 1.62
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
142 TestFunctional/parallel/ServiceCmd/Format 0.46
143 TestFunctional/parallel/ServiceCmd/URL 0.47
144 TestFunctional/delete_echo-server_images 0.04
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestMultiControlPlane/serial/StartCluster 203.13
151 TestMultiControlPlane/serial/DeployApp 7.09
152 TestMultiControlPlane/serial/PingHostFromPods 1.2
153 TestMultiControlPlane/serial/AddWorkerNode 55.88
154 TestMultiControlPlane/serial/NodeLabels 0.07
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
156 TestMultiControlPlane/serial/CopyFile 12.75
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
162 TestMultiControlPlane/serial/DeleteSecondaryNode 17.08
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
165 TestMultiControlPlane/serial/RestartCluster 343.49
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
167 TestMultiControlPlane/serial/AddSecondaryNode 71.62
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
172 TestJSONOutput/start/Command 54.15
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.67
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.58
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 6.66
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.2
200 TestMainNoArgs 0.05
201 TestMinikubeProfile 84.8
204 TestMountStart/serial/StartWithMountFirst 30.03
205 TestMountStart/serial/VerifyMountFirst 0.37
206 TestMountStart/serial/StartWithMountSecond 27.28
207 TestMountStart/serial/VerifyMountSecond 0.37
208 TestMountStart/serial/DeleteFirst 0.68
209 TestMountStart/serial/VerifyMountPostDelete 0.38
210 TestMountStart/serial/Stop 1.28
211 TestMountStart/serial/RestartStopped 23.28
212 TestMountStart/serial/VerifyMountPostStop 0.36
215 TestMultiNode/serial/FreshStart2Nodes 120.27
216 TestMultiNode/serial/DeployApp2Nodes 4.1
217 TestMultiNode/serial/PingHostFrom2Pods 0.77
218 TestMultiNode/serial/AddNode 45.79
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.23
221 TestMultiNode/serial/CopyFile 7.34
222 TestMultiNode/serial/StopNode 2.24
223 TestMultiNode/serial/StartAfterStop 38.59
225 TestMultiNode/serial/DeleteNode 2.31
227 TestMultiNode/serial/RestartMultiNode 206.37
228 TestMultiNode/serial/ValidateNameConflict 42.85
235 TestScheduledStopUnix 110.53
239 TestRunningBinaryUpgrade 142.84
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
245 TestNoKubernetes/serial/StartWithK8s 116.12
246 TestStoppedBinaryUpgrade/Setup 0.84
247 TestStoppedBinaryUpgrade/Upgrade 170.24
248 TestNoKubernetes/serial/StartWithStopK8s 10.55
249 TestNoKubernetes/serial/Start 44.58
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
251 TestNoKubernetes/serial/ProfileList 6.68
252 TestNoKubernetes/serial/Stop 1.65
253 TestNoKubernetes/serial/StartNoArgs 34.52
254 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
255 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
264 TestPause/serial/Start 68.82
TestDownloadOnly/v1.20.0/json-events (9.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-160437 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-160437 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.34004351s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.34s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-160437
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-160437: exit status 85 (61.54475ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-160437 | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC |          |
	|         | -p download-only-160437        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:45:43
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:45:43.612417  120975 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:43.612533  120975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:43.612542  120975 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:43.612546  120975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:43.612732  120975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	W0729 10:45:43.612848  120975 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19336-113730/.minikube/config/config.json: open /home/jenkins/minikube-integration/19336-113730/.minikube/config/config.json: no such file or directory
	I0729 10:45:43.613484  120975 out.go:298] Setting JSON to true
	I0729 10:45:43.614402  120975 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1695,"bootTime":1722248249,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:45:43.614462  120975 start.go:139] virtualization: kvm guest
	I0729 10:45:43.616722  120975 out.go:97] [download-only-160437] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0729 10:45:43.616843  120975 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 10:45:43.616892  120975 notify.go:220] Checking for updates...
	I0729 10:45:43.618165  120975 out.go:169] MINIKUBE_LOCATION=19336
	I0729 10:45:43.619501  120975 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:45:43.620802  120975 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 10:45:43.622061  120975 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 10:45:43.623195  120975 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 10:45:43.625356  120975 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:45:43.625577  120975 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:45:43.722147  120975 out.go:97] Using the kvm2 driver based on user configuration
	I0729 10:45:43.722179  120975 start.go:297] selected driver: kvm2
	I0729 10:45:43.722188  120975 start.go:901] validating driver "kvm2" against <nil>
	I0729 10:45:43.722533  120975 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:45:43.722693  120975 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:45:43.737855  120975 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:45:43.737933  120975 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:45:43.738427  120975 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 10:45:43.738592  120975 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:45:43.738647  120975 cni.go:84] Creating CNI manager for ""
	I0729 10:45:43.738660  120975 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:45:43.738669  120975 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:45:43.738734  120975 start.go:340] cluster config:
	{Name:download-only-160437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-160437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:45:43.738956  120975 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:45:43.740780  120975 out.go:97] Downloading VM boot image ...
	I0729 10:45:43.740817  120975 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19336-113730/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 10:45:45.971223  120975 out.go:97] Starting "download-only-160437" primary control-plane node in "download-only-160437" cluster
	I0729 10:45:45.971253  120975 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 10:45:45.997489  120975 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 10:45:45.997521  120975 cache.go:56] Caching tarball of preloaded images
	I0729 10:45:45.997701  120975 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 10:45:45.999501  120975 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 10:45:45.999522  120975 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:45:46.024274  120975 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-160437 host does not exist
	  To start a cluster, run: "minikube start -p download-only-160437"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-160437
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (5.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-035784 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-035784 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.158991361s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (5.16s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-035784
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-035784: exit status 85 (61.064775ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-160437 | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC |                     |
	|         | -p download-only-160437        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC | 29 Jul 24 10:45 UTC |
	| delete  | -p download-only-160437        | download-only-160437 | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC | 29 Jul 24 10:45 UTC |
	| start   | -o=json --download-only        | download-only-035784 | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC |                     |
	|         | -p download-only-035784        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:45:53
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:45:53.273221  121161 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:53.273480  121161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:53.273490  121161 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:53.273497  121161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:53.273697  121161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 10:45:53.274286  121161 out.go:298] Setting JSON to true
	I0729 10:45:53.275218  121161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1704,"bootTime":1722248249,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:45:53.275281  121161 start.go:139] virtualization: kvm guest
	I0729 10:45:53.277551  121161 out.go:97] [download-only-035784] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:45:53.277718  121161 notify.go:220] Checking for updates...
	I0729 10:45:53.279047  121161 out.go:169] MINIKUBE_LOCATION=19336
	I0729 10:45:53.280579  121161 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:45:53.281950  121161 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 10:45:53.283412  121161 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 10:45:53.284697  121161 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 10:45:53.287276  121161 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:45:53.287498  121161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:45:53.319734  121161 out.go:97] Using the kvm2 driver based on user configuration
	I0729 10:45:53.319763  121161 start.go:297] selected driver: kvm2
	I0729 10:45:53.319771  121161 start.go:901] validating driver "kvm2" against <nil>
	I0729 10:45:53.320114  121161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:45:53.320212  121161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19336-113730/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 10:45:53.337106  121161 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 10:45:53.337200  121161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:45:53.337930  121161 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 10:45:53.338154  121161 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:45:53.338225  121161 cni.go:84] Creating CNI manager for ""
	I0729 10:45:53.338240  121161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 10:45:53.338250  121161 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:45:53.338325  121161 start.go:340] cluster config:
	{Name:download-only-035784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-035784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:45:53.338448  121161 iso.go:125] acquiring lock: {Name:mk2759c73d87a363c74da6ee3415f9d626473ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:45:53.340403  121161 out.go:97] Starting "download-only-035784" primary control-plane node in "download-only-035784" cluster
	I0729 10:45:53.340427  121161 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:45:53.395877  121161 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 10:45:53.395902  121161 cache.go:56] Caching tarball of preloaded images
	I0729 10:45:53.396046  121161 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:45:53.397932  121161 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 10:45:53.397953  121161 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:45:53.433720  121161 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 10:45:57.027716  121161 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:45:57.027813  121161 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19336-113730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 10:45:57.789629  121161 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 10:45:57.789977  121161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/download-only-035784/config.json ...
	I0729 10:45:57.790007  121161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/download-only-035784/config.json: {Name:mkbda2dd79b5f7becd4eda271bb6b4edb5a2d63e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:45:57.790163  121161 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 10:45:57.790285  121161 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19336-113730/.minikube/cache/linux/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-035784 host does not exist
	  To start a cluster, run: "minikube start -p download-only-035784"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-035784
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (4.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-727179 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-727179 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.339608927s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (4.34s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-727179
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-727179: exit status 85 (57.885144ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-160437 | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC |                     |
	|         | -p download-only-160437             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC | 29 Jul 24 10:45 UTC |
	| delete  | -p download-only-160437             | download-only-160437 | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC | 29 Jul 24 10:45 UTC |
	| start   | -o=json --download-only             | download-only-035784 | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC |                     |
	|         | -p download-only-035784             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC | 29 Jul 24 10:45 UTC |
	| delete  | -p download-only-035784             | download-only-035784 | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC | 29 Jul 24 10:45 UTC |
	| start   | -o=json --download-only             | download-only-727179 | jenkins | v1.33.1 | 29 Jul 24 10:45 UTC |                     |
	|         | -p download-only-727179             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:45:58
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:45:58.757038  121367 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:45:58.757166  121367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:58.757174  121367 out.go:304] Setting ErrFile to fd 2...
	I0729 10:45:58.757179  121367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:45:58.757397  121367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 10:45:58.758000  121367 out.go:298] Setting JSON to true
	I0729 10:45:58.758859  121367 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1710,"bootTime":1722248249,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 10:45:58.758922  121367 start.go:139] virtualization: kvm guest
	I0729 10:45:58.761160  121367 out.go:97] [download-only-727179] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 10:45:58.761373  121367 notify.go:220] Checking for updates...
	I0729 10:45:58.762573  121367 out.go:169] MINIKUBE_LOCATION=19336
	I0729 10:45:58.764062  121367 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:45:58.765575  121367 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 10:45:58.766912  121367 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 10:45:58.768411  121367 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-727179 host does not exist
	  To start a cluster, run: "minikube start -p download-only-727179"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-727179
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.56s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-865894 --alsologtostderr --binary-mirror http://127.0.0.1:34365 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-865894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-865894
--- PASS: TestBinaryMirror (0.56s)

TestOffline (122.02s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-390530 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-390530 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m1.237737828s)
helpers_test.go:175: Cleaning up "offline-crio-390530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-390530
--- PASS: TestOffline (122.02s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-693556
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-693556: exit status 85 (48.439543ms)

-- stdout --
	* Profile "addons-693556" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-693556"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-693556
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-693556: exit status 85 (46.861577ms)

-- stdout --
	* Profile "addons-693556" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-693556"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestCertOptions (55.65s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-882510 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0729 12:24:27.394232  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-882510 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (53.973823583s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-882510 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-882510 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-882510 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-882510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-882510
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-882510: (1.13899133s)
--- PASS: TestCertOptions (55.65s)

TestCertExpiration (291.26s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-524248 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-524248 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m10.80540754s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-524248 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-524248 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.428729295s)
helpers_test.go:175: Cleaning up "cert-expiration-524248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-524248
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-524248: (1.026095988s)
--- PASS: TestCertExpiration (291.26s)

TestForceSystemdFlag (54.01s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-327451 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-327451 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.682357489s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-327451 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-327451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-327451
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-327451: (1.086774804s)
--- PASS: TestForceSystemdFlag (54.01s)

TestForceSystemdEnv (43.82s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-452635 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-452635 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.036595584s)
helpers_test.go:175: Cleaning up "force-systemd-env-452635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-452635
--- PASS: TestForceSystemdEnv (43.82s)

TestKVMDriverInstallOrUpdate (3.66s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.66s)

TestErrorSpam/setup (38.47s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-181391 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-181391 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-181391 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-181391 --driver=kvm2  --container-runtime=crio: (38.471110477s)
--- PASS: TestErrorSpam/setup (38.47s)

TestErrorSpam/start (0.37s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.74s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.52s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.49s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

TestErrorSpam/stop (4.77s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 stop: (1.478832615s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 stop: (1.45616053s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-181391 --log_dir /tmp/nospam-181391 stop: (1.833773766s)
--- PASS: TestErrorSpam/stop (4.77s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19336-113730/.minikube/files/etc/test/nested/copy/120963/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (67.65s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-577059 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-577059 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m7.645408053s)
--- PASS: TestFunctional/serial/StartWithProxy (67.65s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.98s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-577059 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-577059 --alsologtostderr -v=8: (37.979616897s)
functional_test.go:659: soft start took 37.980430242s for "functional-577059" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.98s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-577059 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.84s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 cache add registry.k8s.io/pause:3.1: (1.480584871s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 cache add registry.k8s.io/pause:3.3: (1.247757794s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 cache add registry.k8s.io/pause:latest: (1.109704847s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.84s)

TestFunctional/serial/CacheCmd/cache/add_local (1.97s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-577059 /tmp/TestFunctionalserialCacheCmdcacheadd_local498642501/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 cache add minikube-local-cache-test:functional-577059
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 cache add minikube-local-cache-test:functional-577059: (1.638310635s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 cache delete minikube-local-cache-test:functional-577059
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-577059
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.97s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-577059 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (217.267631ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 kubectl -- --context functional-577059 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-577059 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (32.51s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-577059 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-577059 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.514497899s)
functional_test.go:757: restart took 32.51463361s for "functional-577059" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.51s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-577059 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.35s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 logs: (1.346275091s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

TestFunctional/serial/LogsFileCmd (1.38s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 logs --file /tmp/TestFunctionalserialLogsFileCmd3608105260/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 logs --file /tmp/TestFunctionalserialLogsFileCmd3608105260/001/logs.txt: (1.382597466s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

TestFunctional/serial/InvalidService (4.34s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-577059 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-577059
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-577059: exit status 115 (288.058798ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.227:30907 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-577059 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)

TestFunctional/parallel/ConfigCmd (0.36s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-577059 config get cpus: exit status 14 (60.427547ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-577059 config get cpus: exit status 14 (48.399327ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

TestFunctional/parallel/DashboardCmd (16.81s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-577059 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-577059 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 134686: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.81s)

TestFunctional/parallel/DryRun (0.43s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-577059 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-577059 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (286.621027ms)

-- stdout --
	* [functional-577059] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19336
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0729 11:29:45.253469  134579 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:29:45.253722  134579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:29:45.253731  134579 out.go:304] Setting ErrFile to fd 2...
	I0729 11:29:45.253736  134579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:29:45.253957  134579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:29:45.254514  134579 out.go:298] Setting JSON to false
	I0729 11:29:45.255453  134579 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4336,"bootTime":1722248249,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:29:45.255521  134579 start.go:139] virtualization: kvm guest
	I0729 11:29:45.398198  134579 out.go:177] * [functional-577059] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 11:29:45.399755  134579 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 11:29:45.399790  134579 notify.go:220] Checking for updates...
	I0729 11:29:45.402460  134579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:29:45.403653  134579 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:29:45.404988  134579 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:29:45.406310  134579 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:29:45.408017  134579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:29:45.409755  134579 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:29:45.410306  134579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:29:45.410368  134579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:29:45.425996  134579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39035
	I0729 11:29:45.426493  134579 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:29:45.427123  134579 main.go:141] libmachine: Using API Version  1
	I0729 11:29:45.427165  134579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:29:45.427674  134579 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:29:45.427900  134579 main.go:141] libmachine: (functional-577059) Calling .DriverName
	I0729 11:29:45.428238  134579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:29:45.428615  134579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:29:45.428661  134579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:29:45.444804  134579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35919
	I0729 11:29:45.445473  134579 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:29:45.446194  134579 main.go:141] libmachine: Using API Version  1
	I0729 11:29:45.446235  134579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:29:45.446590  134579 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:29:45.446802  134579 main.go:141] libmachine: (functional-577059) Calling .DriverName
	I0729 11:29:45.483547  134579 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 11:29:45.485002  134579 start.go:297] selected driver: kvm2
	I0729 11:29:45.485019  134579 start.go:901] validating driver "kvm2" against &{Name:functional-577059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-577059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:29:45.485173  134579 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:29:45.487942  134579 out.go:177] 
	W0729 11:29:45.489363  134579 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 11:29:45.490635  134579 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-577059 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)

TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-577059 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-577059 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (142.628486ms)

-- stdout --
	* [functional-577059] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19336
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 11:29:38.905071  134251 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:29:38.905166  134251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:29:38.905173  134251 out.go:304] Setting ErrFile to fd 2...
	I0729 11:29:38.905177  134251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:29:38.905440  134251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 11:29:38.905960  134251 out.go:298] Setting JSON to false
	I0729 11:29:38.906875  134251 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4330,"bootTime":1722248249,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 11:29:38.906934  134251 start.go:139] virtualization: kvm guest
	I0729 11:29:38.909097  134251 out.go:177] * [functional-577059] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0729 11:29:38.910415  134251 out.go:177]   - MINIKUBE_LOCATION=19336
	I0729 11:29:38.910417  134251 notify.go:220] Checking for updates...
	I0729 11:29:38.912746  134251 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:29:38.913985  134251 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	I0729 11:29:38.915207  134251 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	I0729 11:29:38.916380  134251 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 11:29:38.917588  134251 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:29:38.919260  134251 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 11:29:38.919874  134251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:29:38.919940  134251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:29:38.935617  134251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36371
	I0729 11:29:38.936020  134251 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:29:38.936717  134251 main.go:141] libmachine: Using API Version  1
	I0729 11:29:38.936753  134251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:29:38.937144  134251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:29:38.937342  134251 main.go:141] libmachine: (functional-577059) Calling .DriverName
	I0729 11:29:38.937589  134251 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:29:38.937880  134251 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 11:29:38.937922  134251 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 11:29:38.953145  134251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I0729 11:29:38.953665  134251 main.go:141] libmachine: () Calling .GetVersion
	I0729 11:29:38.954251  134251 main.go:141] libmachine: Using API Version  1
	I0729 11:29:38.954272  134251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 11:29:38.954584  134251 main.go:141] libmachine: () Calling .GetMachineName
	I0729 11:29:38.954815  134251 main.go:141] libmachine: (functional-577059) Calling .DriverName
	I0729 11:29:38.989072  134251 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0729 11:29:38.990345  134251 start.go:297] selected driver: kvm2
	I0729 11:29:38.990365  134251 start.go:901] validating driver "kvm2" against &{Name:functional-577059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-577059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:29:38.990521  134251 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:29:38.992611  134251 out.go:177] 
	W0729 11:29:38.993838  134251 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 11:29:38.994948  134251 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.76s)
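
Note: the three runs above exercise the default, go-template, and JSON renderings of the same status query. For reference, a minimal sketch of the equivalent invocations against this profile:

	out/minikube-linux-amd64 -p functional-577059 status
	out/minikube-linux-amd64 -p functional-577059 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-577059 status -o json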

                                                
                                    

TestFunctional/parallel/ServiceCmdConnect (17.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-577059 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-577059 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-9wssv" [8098f369-2cf4-412e-977c-b3457dde24cc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-9wssv" [8098f369-2cf4-412e-977c-b3457dde24cc] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.004128996s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.227:32700
functional_test.go:1671: http://192.168.39.227:32700: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-9wssv

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.227:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.227:32700
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (17.66s)
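
Note: the flow above is create a deployment, expose it as a NodePort service, wait for the pod, then resolve the node URL through minikube and probe it. A minimal by-hand sketch, assuming the functional-577059 context:

	kubectl --context functional-577059 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-577059 expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl --context functional-577059 wait --for=condition=ready pod -l app=hello-node-connect --timeout=600s
	URL=$(out/minikube-linux-amd64 -p functional-577059 service hello-node-connect --url)
	curl -s "$URL"    # expect an echoserver reply like the one shown above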

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (43.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a3484e97-20a7-424f-81cb-946bc548c14d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004229238s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-577059 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-577059 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-577059 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-577059 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-577059 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e5f05a73-1f7e-4ae3-9dd3-9c248844590b] Pending
helpers_test.go:344: "sp-pod" [e5f05a73-1f7e-4ae3-9dd3-9c248844590b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e5f05a73-1f7e-4ae3-9dd3-9c248844590b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.052613528s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-577059 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-577059 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-577059 delete -f testdata/storage-provisioner/pod.yaml: (1.666697298s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-577059 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [651ad994-65b7-4447-bf71-4d0ff35d8991] Pending
helpers_test.go:344: "sp-pod" [651ad994-65b7-4447-bf71-4d0ff35d8991] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2024/07/29 11:30:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [651ad994-65b7-4447-bf71-4d0ff35d8991] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005484973s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-577059 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.06s)
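
Note: the second sp-pod exists to prove that /tmp/mount/foo, written before the first pod was deleted, is still on the claim afterwards. A condensed sketch of the same flow, assuming the repository's testdata manifests:

	kubectl --context functional-577059 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-577059 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-577059 wait --for=condition=ready pod/sp-pod --timeout=180s
	kubectl --context functional-577059 exec sp-pod -- touch /tmp/mount/foo
	# Recreate the pod and confirm the file survived on the persistent volume
	kubectl --context functional-577059 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-577059 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-577059 wait --for=condition=ready pod/sp-pod --timeout=180s
	kubectl --context functional-577059 exec sp-pod -- ls /tmp/mount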

                                                
                                    
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh -n functional-577059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 cp functional-577059:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd159752563/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh -n functional-577059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh -n functional-577059 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.27s)
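
Note: each cp above is paired with an ssh cat to confirm the bytes actually arrived. A minimal sketch of the same round trip, assuming a local testdata/cp-test.txt:

	# host -> VM
	out/minikube-linux-amd64 -p functional-577059 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-577059 ssh "sudo cat /home/docker/cp-test.txt"
	# VM -> host, then compare
	out/minikube-linux-amd64 -p functional-577059 cp functional-577059:/home/docker/cp-test.txt /tmp/cp-test.txt
	diff testdata/cp-test.txt /tmp/cp-test.txt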

                                                
                                    
TestFunctional/parallel/MySQL (27.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-577059 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-p9jrf" [b3569dee-227a-4da3-936c-5488731bfe45] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-p9jrf" [b3569dee-227a-4da3-936c-5488731bfe45] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.259232026s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-577059 exec mysql-64454c8b5c-p9jrf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-577059 exec mysql-64454c8b5c-p9jrf -- mysql -ppassword -e "show databases;": exit status 1 (167.435338ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-577059 exec mysql-64454c8b5c-p9jrf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-577059 exec mysql-64454c8b5c-p9jrf -- mysql -ppassword -e "show databases;": exit status 1 (155.199049ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-577059 exec mysql-64454c8b5c-p9jrf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.41s)
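
Note: the two non-zero exits above are expected. The pod is Running but mysqld is still initializing, so the test simply retries the query until it succeeds. A by-hand version of that retry, with the pod name looked up rather than hard-coded:

	POD=$(kubectl --context functional-577059 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
	# Retry until mysqld accepts connections on its local socket
	until kubectl --context functional-577059 exec "$POD" -- mysql -ppassword -e "show databases;"; do
	  sleep 2
	done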

                                                
                                    
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/120963/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "sudo cat /etc/test/nested/copy/120963/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)
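
Note: FileSync relies on minikube's file sync convention, where anything placed under $MINIKUBE_HOME/files/<path> on the host is copied to <path> inside the VM; the checked path encodes the test process ID (120963). A quick manual spot check:

	# The host-side copy lives under $MINIKUBE_HOME/files/etc/test/nested/copy/120963/hosts
	out/minikube-linux-amd64 -p functional-577059 ssh "sudo cat /etc/test/nested/copy/120963/hosts"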

                                                
                                    
TestFunctional/parallel/CertSync (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/120963.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "sudo cat /etc/ssl/certs/120963.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/120963.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "sudo cat /usr/share/ca-certificates/120963.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1209632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "sudo cat /etc/ssl/certs/1209632.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1209632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "sudo cat /usr/share/ca-certificates/1209632.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)
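
Note: CertSync checks that host CA certificates named after the same test PID (120963.pem and 1209632.pem) are present in both certificate directories inside the VM, along with their hash-named entries. An equivalent manual loop:

	for f in /etc/ssl/certs/120963.pem /usr/share/ca-certificates/120963.pem /etc/ssl/certs/51391683.0 \
	         /etc/ssl/certs/1209632.pem /usr/share/ca-certificates/1209632.pem /etc/ssl/certs/3ec20f2e.0; do
	  out/minikube-linux-amd64 -p functional-577059 ssh "sudo test -s $f" && echo "ok: $f"
	done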

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-577059 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-577059 ssh "sudo systemctl is-active docker": exit status 1 (278.436189ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-577059 ssh "sudo systemctl is-active containerd": exit status 1 (226.795682ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
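
Note: because this run uses crio, docker and containerd are expected to be inactive; systemctl is-active exits non-zero for an inactive unit, which is why the ssh wrapper reports a non-zero exit while stdout says "inactive". A manual check over all three runtimes:

	for rt in docker containerd crio; do
	  echo -n "$rt: "
	  out/minikube-linux-amd64 -p functional-577059 ssh "sudo systemctl is-active $rt" || true
	done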

                                                
                                    
TestFunctional/parallel/License (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-577059 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-577059 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-577059 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-577059 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 132942: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-577059 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-577059
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-577059
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-577059 image ls --format short --alsologtostderr:
I0729 11:30:03.026456  135358 out.go:291] Setting OutFile to fd 1 ...
I0729 11:30:03.026585  135358 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:30:03.026595  135358 out.go:304] Setting ErrFile to fd 2...
I0729 11:30:03.026601  135358 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:30:03.026804  135358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
I0729 11:30:03.027370  135358 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 11:30:03.027492  135358 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 11:30:03.027913  135358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 11:30:03.027972  135358 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 11:30:03.043695  135358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42667
I0729 11:30:03.044161  135358 main.go:141] libmachine: () Calling .GetVersion
I0729 11:30:03.044705  135358 main.go:141] libmachine: Using API Version  1
I0729 11:30:03.044744  135358 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 11:30:03.045107  135358 main.go:141] libmachine: () Calling .GetMachineName
I0729 11:30:03.045331  135358 main.go:141] libmachine: (functional-577059) Calling .GetState
I0729 11:30:03.047503  135358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 11:30:03.047565  135358 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 11:30:03.063092  135358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43745
I0729 11:30:03.063561  135358 main.go:141] libmachine: () Calling .GetVersion
I0729 11:30:03.064107  135358 main.go:141] libmachine: Using API Version  1
I0729 11:30:03.064144  135358 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 11:30:03.064539  135358 main.go:141] libmachine: () Calling .GetMachineName
I0729 11:30:03.064754  135358 main.go:141] libmachine: (functional-577059) Calling .DriverName
I0729 11:30:03.065028  135358 ssh_runner.go:195] Run: systemctl --version
I0729 11:30:03.065055  135358 main.go:141] libmachine: (functional-577059) Calling .GetSSHHostname
I0729 11:30:03.068142  135358 main.go:141] libmachine: (functional-577059) DBG | domain functional-577059 has defined MAC address 52:54:00:d6:7e:5b in network mk-functional-577059
I0729 11:30:03.068570  135358 main.go:141] libmachine: (functional-577059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:7e:5b", ip: ""} in network mk-functional-577059: {Iface:virbr1 ExpiryTime:2024-07-29 12:27:07 +0000 UTC Type:0 Mac:52:54:00:d6:7e:5b Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-577059 Clientid:01:52:54:00:d6:7e:5b}
I0729 11:30:03.068598  135358 main.go:141] libmachine: (functional-577059) DBG | domain functional-577059 has defined IP address 192.168.39.227 and MAC address 52:54:00:d6:7e:5b in network mk-functional-577059
I0729 11:30:03.068759  135358 main.go:141] libmachine: (functional-577059) Calling .GetSSHPort
I0729 11:30:03.068920  135358 main.go:141] libmachine: (functional-577059) Calling .GetSSHKeyPath
I0729 11:30:03.069101  135358 main.go:141] libmachine: (functional-577059) Calling .GetSSHUsername
I0729 11:30:03.069261  135358 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/functional-577059/id_rsa Username:docker}
I0729 11:30:03.151018  135358 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 11:30:03.189091  135358 main.go:141] libmachine: Making call to close driver server
I0729 11:30:03.189105  135358 main.go:141] libmachine: (functional-577059) Calling .Close
I0729 11:30:03.189377  135358 main.go:141] libmachine: Successfully made call to close driver server
I0729 11:30:03.189393  135358 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 11:30:03.189423  135358 main.go:141] libmachine: Making call to close driver server
I0729 11:30:03.189443  135358 main.go:141] libmachine: (functional-577059) Calling .Close
I0729 11:30:03.189463  135358 main.go:141] libmachine: (functional-577059) DBG | Closing plugin on server side
I0729 11:30:03.189747  135358 main.go:141] libmachine: (functional-577059) DBG | Closing plugin on server side
I0729 11:30:03.189750  135358 main.go:141] libmachine: Successfully made call to close driver server
I0729 11:30:03.189771  135358 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
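
Note: this group (short, table, json, yaml) lists the same crictl image inventory in four renderings; only the output format differs. One loop covers them all:

	for fmt in short table json yaml; do
	  out/minikube-linux-amd64 -p functional-577059 image ls --format "$fmt"
	done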

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-577059 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server           | functional-577059  | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | alpine             | 1ae23480369fa | 45.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-577059  | 31a78963da70a | 3.33kB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| localhost/my-image                      | functional-577059  | 41adc2c27c0d9 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-577059 image ls --format table --alsologtostderr:
I0729 11:30:07.488858  135525 out.go:291] Setting OutFile to fd 1 ...
I0729 11:30:07.489021  135525 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:30:07.489033  135525 out.go:304] Setting ErrFile to fd 2...
I0729 11:30:07.489040  135525 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:30:07.489236  135525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
I0729 11:30:07.489790  135525 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 11:30:07.489906  135525 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 11:30:07.490296  135525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 11:30:07.490354  135525 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 11:30:07.505444  135525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
I0729 11:30:07.505942  135525 main.go:141] libmachine: () Calling .GetVersion
I0729 11:30:07.506475  135525 main.go:141] libmachine: Using API Version  1
I0729 11:30:07.506499  135525 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 11:30:07.506889  135525 main.go:141] libmachine: () Calling .GetMachineName
I0729 11:30:07.507077  135525 main.go:141] libmachine: (functional-577059) Calling .GetState
I0729 11:30:07.508845  135525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 11:30:07.508884  135525 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 11:30:07.523781  135525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
I0729 11:30:07.524312  135525 main.go:141] libmachine: () Calling .GetVersion
I0729 11:30:07.524802  135525 main.go:141] libmachine: Using API Version  1
I0729 11:30:07.524830  135525 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 11:30:07.525191  135525 main.go:141] libmachine: () Calling .GetMachineName
I0729 11:30:07.525338  135525 main.go:141] libmachine: (functional-577059) Calling .DriverName
I0729 11:30:07.525554  135525 ssh_runner.go:195] Run: systemctl --version
I0729 11:30:07.525594  135525 main.go:141] libmachine: (functional-577059) Calling .GetSSHHostname
I0729 11:30:07.528264  135525 main.go:141] libmachine: (functional-577059) DBG | domain functional-577059 has defined MAC address 52:54:00:d6:7e:5b in network mk-functional-577059
I0729 11:30:07.528750  135525 main.go:141] libmachine: (functional-577059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:7e:5b", ip: ""} in network mk-functional-577059: {Iface:virbr1 ExpiryTime:2024-07-29 12:27:07 +0000 UTC Type:0 Mac:52:54:00:d6:7e:5b Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-577059 Clientid:01:52:54:00:d6:7e:5b}
I0729 11:30:07.528776  135525 main.go:141] libmachine: (functional-577059) DBG | domain functional-577059 has defined IP address 192.168.39.227 and MAC address 52:54:00:d6:7e:5b in network mk-functional-577059
I0729 11:30:07.529003  135525 main.go:141] libmachine: (functional-577059) Calling .GetSSHPort
I0729 11:30:07.529219  135525 main.go:141] libmachine: (functional-577059) Calling .GetSSHKeyPath
I0729 11:30:07.529409  135525 main.go:141] libmachine: (functional-577059) Calling .GetSSHUsername
I0729 11:30:07.529633  135525 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/functional-577059/id_rsa Username:docker}
I0729 11:30:07.611372  135525 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 11:30:07.648549  135525 main.go:141] libmachine: Making call to close driver server
I0729 11:30:07.648563  135525 main.go:141] libmachine: (functional-577059) Calling .Close
I0729 11:30:07.648850  135525 main.go:141] libmachine: Successfully made call to close driver server
I0729 11:30:07.648873  135525 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 11:30:07.648889  135525 main.go:141] libmachine: Making call to close driver server
I0729 11:30:07.648896  135525 main.go:141] libmachine: (functional-577059) Calling .Close
I0729 11:30:07.649205  135525 main.go:141] libmachine: Successfully made call to close driver server
I0729 11:30:07.649223  135525 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-577059 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"31a78963da70ac44783b93e7089e1fa39c50c0b769f2b89bd86db6d55314f100","repoDigests":["localhost/minikube-local-cache-test@sha256:6656c12ed3eaed74287ed461fd66aa03a4263681976161b3dd2d09afd23e9cf5"],"repoTags":["localhost/minikube-local-cache-test:functional-577059"],"size":"3330"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"3edc18e7b76722
eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-577059"],"size":"4943877"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b815
6d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"41adc2c27c0d9947cac154ed01778ffb96a9fe19d369352c95cb19609f4094eb","repoDigests":["localhost/my-image@sha256:a360882e0a113ad7ddef2edc5b272b412ac3c5d8b0e517154db636986ec66eeb"],"repoTags":["localhost/my-image:functional-577059"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf85947596
9"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pa
use:latest"],"size":"247077"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"0ad5d8f2c7f65ad6dd65b1a8263d0bf5d9ed96fcc12af016dd4451905c003d2a","repoDigests":["docker.io/library/5e0d65371e5edea428ab7df3458171f094e858ef764923164867729cbbb6feef-tmp@sha256:6158cf6db31a4080e93197e71ec05ca947a2c7339237d7aff8b2f49550cbcef7"],"repoTags":[],"size":"1466018"},{"id
":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d2
9f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9","docker.io/library/nginx@sha256:a377278b7dde3a8012b25d141d025a88dbf9f5ed13c5cdf21ee241e7ec07ab57"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45068794"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b
6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-577059 image ls --format json --alsologtostderr:
I0729 11:30:07.275917  135501 out.go:291] Setting OutFile to fd 1 ...
I0729 11:30:07.276128  135501 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:30:07.276146  135501 out.go:304] Setting ErrFile to fd 2...
I0729 11:30:07.276156  135501 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:30:07.276727  135501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
I0729 11:30:07.277381  135501 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 11:30:07.277478  135501 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 11:30:07.277844  135501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 11:30:07.277912  135501 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 11:30:07.292831  135501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
I0729 11:30:07.293392  135501 main.go:141] libmachine: () Calling .GetVersion
I0729 11:30:07.293998  135501 main.go:141] libmachine: Using API Version  1
I0729 11:30:07.294025  135501 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 11:30:07.294334  135501 main.go:141] libmachine: () Calling .GetMachineName
I0729 11:30:07.294540  135501 main.go:141] libmachine: (functional-577059) Calling .GetState
I0729 11:30:07.296288  135501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 11:30:07.296347  135501 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 11:30:07.311989  135501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
I0729 11:30:07.312485  135501 main.go:141] libmachine: () Calling .GetVersion
I0729 11:30:07.313029  135501 main.go:141] libmachine: Using API Version  1
I0729 11:30:07.313060  135501 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 11:30:07.313400  135501 main.go:141] libmachine: () Calling .GetMachineName
I0729 11:30:07.313575  135501 main.go:141] libmachine: (functional-577059) Calling .DriverName
I0729 11:30:07.313791  135501 ssh_runner.go:195] Run: systemctl --version
I0729 11:30:07.313812  135501 main.go:141] libmachine: (functional-577059) Calling .GetSSHHostname
I0729 11:30:07.316334  135501 main.go:141] libmachine: (functional-577059) DBG | domain functional-577059 has defined MAC address 52:54:00:d6:7e:5b in network mk-functional-577059
I0729 11:30:07.316720  135501 main.go:141] libmachine: (functional-577059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:7e:5b", ip: ""} in network mk-functional-577059: {Iface:virbr1 ExpiryTime:2024-07-29 12:27:07 +0000 UTC Type:0 Mac:52:54:00:d6:7e:5b Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-577059 Clientid:01:52:54:00:d6:7e:5b}
I0729 11:30:07.316752  135501 main.go:141] libmachine: (functional-577059) DBG | domain functional-577059 has defined IP address 192.168.39.227 and MAC address 52:54:00:d6:7e:5b in network mk-functional-577059
I0729 11:30:07.316925  135501 main.go:141] libmachine: (functional-577059) Calling .GetSSHPort
I0729 11:30:07.317115  135501 main.go:141] libmachine: (functional-577059) Calling .GetSSHKeyPath
I0729 11:30:07.317287  135501 main.go:141] libmachine: (functional-577059) Calling .GetSSHUsername
I0729 11:30:07.317444  135501 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/functional-577059/id_rsa Username:docker}
I0729 11:30:07.399595  135501 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 11:30:07.437230  135501 main.go:141] libmachine: Making call to close driver server
I0729 11:30:07.437243  135501 main.go:141] libmachine: (functional-577059) Calling .Close
I0729 11:30:07.437613  135501 main.go:141] libmachine: (functional-577059) DBG | Closing plugin on server side
I0729 11:30:07.437620  135501 main.go:141] libmachine: Successfully made call to close driver server
I0729 11:30:07.437648  135501 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 11:30:07.437656  135501 main.go:141] libmachine: Making call to close driver server
I0729 11:30:07.437666  135501 main.go:141] libmachine: (functional-577059) Calling .Close
I0729 11:30:07.437905  135501 main.go:141] libmachine: Successfully made call to close driver server
I0729 11:30:07.437912  135501 main.go:141] libmachine: (functional-577059) DBG | Closing plugin on server side
I0729 11:30:07.437923  135501 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-577059 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-577059
size: "4943877"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 31a78963da70ac44783b93e7089e1fa39c50c0b769f2b89bd86db6d55314f100
repoDigests:
- localhost/minikube-local-cache-test@sha256:6656c12ed3eaed74287ed461fd66aa03a4263681976161b3dd2d09afd23e9cf5
repoTags:
- localhost/minikube-local-cache-test:functional-577059
size: "3330"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
- docker.io/library/nginx@sha256:a377278b7dde3a8012b25d141d025a88dbf9f5ed13c5cdf21ee241e7ec07ab57
repoTags:
- docker.io/library/nginx:alpine
size: "45068794"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-577059 image ls --format yaml --alsologtostderr:
I0729 11:30:03.237513  135382 out.go:291] Setting OutFile to fd 1 ...
I0729 11:30:03.237611  135382 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:30:03.237619  135382 out.go:304] Setting ErrFile to fd 2...
I0729 11:30:03.237623  135382 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:30:03.237795  135382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
I0729 11:30:03.238345  135382 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 11:30:03.238439  135382 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 11:30:03.238820  135382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 11:30:03.238859  135382 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 11:30:03.254508  135382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
I0729 11:30:03.254946  135382 main.go:141] libmachine: () Calling .GetVersion
I0729 11:30:03.255528  135382 main.go:141] libmachine: Using API Version  1
I0729 11:30:03.255557  135382 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 11:30:03.255986  135382 main.go:141] libmachine: () Calling .GetMachineName
I0729 11:30:03.256198  135382 main.go:141] libmachine: (functional-577059) Calling .GetState
I0729 11:30:03.258301  135382 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 11:30:03.258351  135382 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 11:30:03.273449  135382 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
I0729 11:30:03.273940  135382 main.go:141] libmachine: () Calling .GetVersion
I0729 11:30:03.274452  135382 main.go:141] libmachine: Using API Version  1
I0729 11:30:03.274478  135382 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 11:30:03.274836  135382 main.go:141] libmachine: () Calling .GetMachineName
I0729 11:30:03.275031  135382 main.go:141] libmachine: (functional-577059) Calling .DriverName
I0729 11:30:03.275286  135382 ssh_runner.go:195] Run: systemctl --version
I0729 11:30:03.275313  135382 main.go:141] libmachine: (functional-577059) Calling .GetSSHHostname
I0729 11:30:03.278106  135382 main.go:141] libmachine: (functional-577059) DBG | domain functional-577059 has defined MAC address 52:54:00:d6:7e:5b in network mk-functional-577059
I0729 11:30:03.278505  135382 main.go:141] libmachine: (functional-577059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:7e:5b", ip: ""} in network mk-functional-577059: {Iface:virbr1 ExpiryTime:2024-07-29 12:27:07 +0000 UTC Type:0 Mac:52:54:00:d6:7e:5b Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-577059 Clientid:01:52:54:00:d6:7e:5b}
I0729 11:30:03.278531  135382 main.go:141] libmachine: (functional-577059) DBG | domain functional-577059 has defined IP address 192.168.39.227 and MAC address 52:54:00:d6:7e:5b in network mk-functional-577059
I0729 11:30:03.278746  135382 main.go:141] libmachine: (functional-577059) Calling .GetSSHPort
I0729 11:30:03.278972  135382 main.go:141] libmachine: (functional-577059) Calling .GetSSHKeyPath
I0729 11:30:03.279129  135382 main.go:141] libmachine: (functional-577059) Calling .GetSSHUsername
I0729 11:30:03.279279  135382 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/functional-577059/id_rsa Username:docker}
I0729 11:30:03.362203  135382 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 11:30:03.409021  135382 main.go:141] libmachine: Making call to close driver server
I0729 11:30:03.409036  135382 main.go:141] libmachine: (functional-577059) Calling .Close
I0729 11:30:03.409332  135382 main.go:141] libmachine: Successfully made call to close driver server
I0729 11:30:03.409353  135382 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 11:30:03.409368  135382 main.go:141] libmachine: Making call to close driver server
I0729 11:30:03.409377  135382 main.go:141] libmachine: (functional-577059) Calling .Close
I0729 11:30:03.409668  135382 main.go:141] libmachine: (functional-577059) DBG | Closing plugin on server side
I0729 11:30:03.409674  135382 main.go:141] libmachine: Successfully made call to close driver server
I0729 11:30:03.409706  135382 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
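For reference, the YAML listing above is the output of `minikube image ls --format yaml`; as the alsologtostderr trace shows, the CLI ultimately runs `sudo crictl images --output json` inside the VM and reformats the result. A minimal Go sketch of invoking the same CLI call with os/exec — not the actual test harness; the binary path and profile name are copied from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same CLI call recorded above: list the runtime's images as YAML.
	// Binary path and profile name mirror the log; adjust for a local checkout.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-577059",
		"image", "ls", "--format", "yaml")
	out, err := cmd.Output() // Output captures stdout only; the libmachine trace above went to stderr
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	fmt.Print(string(out)) // YAML entries of {id, repoDigests, repoTags, size}
}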

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-577059 ssh pgrep buildkitd: exit status 1 (217.657266ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image build -t localhost/my-image:functional-577059 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 image build -t localhost/my-image:functional-577059 testdata/build --alsologtostderr: (3.342279114s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-577059 image build -t localhost/my-image:functional-577059 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0ad5d8f2c7f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-577059
--> 41adc2c27c0
Successfully tagged localhost/my-image:functional-577059
41adc2c27c0d9947cac154ed01778ffb96a9fe19d369352c95cb19609f4094eb
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-577059 image build -t localhost/my-image:functional-577059 testdata/build --alsologtostderr:
I0729 11:30:03.682629  135439 out.go:291] Setting OutFile to fd 1 ...
I0729 11:30:03.682773  135439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:30:03.682782  135439 out.go:304] Setting ErrFile to fd 2...
I0729 11:30:03.682786  135439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:30:03.683016  135439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
I0729 11:30:03.683696  135439 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 11:30:03.684670  135439 config.go:182] Loaded profile config "functional-577059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 11:30:03.685075  135439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 11:30:03.685123  135439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 11:30:03.700427  135439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45373
I0729 11:30:03.701038  135439 main.go:141] libmachine: () Calling .GetVersion
I0729 11:30:03.701587  135439 main.go:141] libmachine: Using API Version  1
I0729 11:30:03.701612  135439 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 11:30:03.702012  135439 main.go:141] libmachine: () Calling .GetMachineName
I0729 11:30:03.702194  135439 main.go:141] libmachine: (functional-577059) Calling .GetState
I0729 11:30:03.704177  135439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 11:30:03.704218  135439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 11:30:03.720241  135439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39761
I0729 11:30:03.720699  135439 main.go:141] libmachine: () Calling .GetVersion
I0729 11:30:03.721222  135439 main.go:141] libmachine: Using API Version  1
I0729 11:30:03.721246  135439 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 11:30:03.721637  135439 main.go:141] libmachine: () Calling .GetMachineName
I0729 11:30:03.721818  135439 main.go:141] libmachine: (functional-577059) Calling .DriverName
I0729 11:30:03.722187  135439 ssh_runner.go:195] Run: systemctl --version
I0729 11:30:03.722214  135439 main.go:141] libmachine: (functional-577059) Calling .GetSSHHostname
I0729 11:30:03.725171  135439 main.go:141] libmachine: (functional-577059) DBG | domain functional-577059 has defined MAC address 52:54:00:d6:7e:5b in network mk-functional-577059
I0729 11:30:03.725566  135439 main.go:141] libmachine: (functional-577059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:7e:5b", ip: ""} in network mk-functional-577059: {Iface:virbr1 ExpiryTime:2024-07-29 12:27:07 +0000 UTC Type:0 Mac:52:54:00:d6:7e:5b Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:functional-577059 Clientid:01:52:54:00:d6:7e:5b}
I0729 11:30:03.725591  135439 main.go:141] libmachine: (functional-577059) DBG | domain functional-577059 has defined IP address 192.168.39.227 and MAC address 52:54:00:d6:7e:5b in network mk-functional-577059
I0729 11:30:03.725796  135439 main.go:141] libmachine: (functional-577059) Calling .GetSSHPort
I0729 11:30:03.726017  135439 main.go:141] libmachine: (functional-577059) Calling .GetSSHKeyPath
I0729 11:30:03.726204  135439 main.go:141] libmachine: (functional-577059) Calling .GetSSHUsername
I0729 11:30:03.726388  135439 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/functional-577059/id_rsa Username:docker}
I0729 11:30:03.859559  135439 build_images.go:161] Building image from path: /tmp/build.408898942.tar
I0729 11:30:03.859638  135439 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 11:30:03.875090  135439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.408898942.tar
I0729 11:30:03.880191  135439 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.408898942.tar: stat -c "%s %y" /var/lib/minikube/build/build.408898942.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.408898942.tar': No such file or directory
I0729 11:30:03.880252  135439 ssh_runner.go:362] scp /tmp/build.408898942.tar --> /var/lib/minikube/build/build.408898942.tar (3072 bytes)
I0729 11:30:03.909743  135439 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.408898942
I0729 11:30:03.920287  135439 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.408898942 -xf /var/lib/minikube/build/build.408898942.tar
I0729 11:30:03.943469  135439 crio.go:315] Building image: /var/lib/minikube/build/build.408898942
I0729 11:30:03.943531  135439 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-577059 /var/lib/minikube/build/build.408898942 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0729 11:30:06.947636  135439 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-577059 /var/lib/minikube/build/build.408898942 --cgroup-manager=cgroupfs: (3.004084091s)
I0729 11:30:06.947698  135439 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.408898942
I0729 11:30:06.958974  135439 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.408898942.tar
I0729 11:30:06.970002  135439 build_images.go:217] Built localhost/my-image:functional-577059 from /tmp/build.408898942.tar
I0729 11:30:06.970053  135439 build_images.go:133] succeeded building to: functional-577059
I0729 11:30:06.970059  135439 build_images.go:134] failed building to: 
I0729 11:30:06.970083  135439 main.go:141] libmachine: Making call to close driver server
I0729 11:30:06.970098  135439 main.go:141] libmachine: (functional-577059) Calling .Close
I0729 11:30:06.970378  135439 main.go:141] libmachine: Successfully made call to close driver server
I0729 11:30:06.970395  135439 main.go:141] libmachine: (functional-577059) DBG | Closing plugin on server side
I0729 11:30:06.970404  135439 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 11:30:06.970415  135439 main.go:141] libmachine: Making call to close driver server
I0729 11:30:06.970424  135439 main.go:141] libmachine: (functional-577059) Calling .Close
I0729 11:30:06.970680  135439 main.go:141] libmachine: Successfully made call to close driver server
I0729 11:30:06.970707  135439 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 11:30:06.970710  135439 main.go:141] libmachine: (functional-577059) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.82s)
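The steps above show that after the buildkitd probe fails, minikube builds the image inside the VM with podman from a tarred copy of testdata/build. A minimal sketch, assuming the same binary path and profile as in the log, of driving that build and verifying the tag from Go:

package main

import (
	"log"
	"os"
	"os/exec"
)

// Mirrors the two commands recorded above: build an image inside the VM from a
// local build context, then list images to confirm the tag exists.
func main() {
	build := exec.Command("out/minikube-linux-amd64", "-p", "functional-577059",
		"image", "build", "-t", "localhost/my-image:functional-577059",
		"testdata/build", "--alsologtostderr")
	build.Stdout, build.Stderr = os.Stdout, os.Stderr
	if err := build.Run(); err != nil {
		log.Fatalf("image build failed: %v", err)
	}

	ls := exec.Command("out/minikube-linux-amd64", "-p", "functional-577059", "image", "ls")
	ls.Stdout, ls.Stderr = os.Stdout, os.Stderr
	if err := ls.Run(); err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
}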

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.529203136s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-577059
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "283.045013ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "58.624686ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-577059 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-577059 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8c713a41-eaa1-437c-b144-6552865c0714] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8c713a41-eaa1-437c-b144-6552865c0714] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003921347s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.25s)
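The setup above applies testdata/testsvc.yaml and waits up to 4m for pods labelled run=nginx-svc to become healthy. A rough equivalent of that apply-and-poll loop, assuming plain kubectl against the functional-577059 context and an arbitrary 2-second poll interval:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Apply the test service, as the tunnel setup step does.
	apply := exec.Command("kubectl", "--context", "functional-577059",
		"apply", "-f", "testdata/testsvc.yaml")
	if out, err := apply.CombinedOutput(); err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}

	// Poll until the nginx-svc pod reports phase Running (4m budget, like the test).
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		get := exec.Command("kubectl", "--context", "functional-577059",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}")
		out, err := get.Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("nginx-svc is running")
			return
		}
		time.Sleep(2 * time.Second) // assumption: 2s poll interval
	}
	log.Fatal("timed out waiting for run=nginx-svc")
}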

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "250.087603ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "53.131863ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image load --daemon docker.io/kicbase/echo-server:functional-577059 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 image load --daemon docker.io/kicbase/echo-server:functional-577059 --alsologtostderr: (1.085444464s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image load --daemon docker.io/kicbase/echo-server:functional-577059 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 image load --daemon docker.io/kicbase/echo-server:functional-577059 --alsologtostderr: (1.494202439s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-577059
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image load --daemon docker.io/kicbase/echo-server:functional-577059 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image save docker.io/kicbase/echo-server:functional-577059 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image rm docker.io/kicbase/echo-server:functional-577059 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-577059
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 image save --daemon docker.io/kicbase/echo-server:functional-577059 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 image save --daemon docker.io/kicbase/echo-server:functional-577059 --alsologtostderr: (7.470688749s)
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-577059
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.60s)
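ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above together exercise a save/remove/load round trip for a cached image. A condensed sketch of the same CLI sequence; the tarball path here is a relative placeholder rather than the absolute workspace path in the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one minikube CLI call and streams its output, failing fast on error.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%v failed: %v", args, err)
	}
}

func main() {
	const profile = "functional-577059"
	const tag = "docker.io/kicbase/echo-server:" + profile
	const tar = "echo-server-save.tar" // placeholder; the log uses a workspace path

	run("-p", profile, "image", "save", tag, tar)        // cluster -> tarball
	run("-p", profile, "image", "rm", tag)               // drop it from the runtime
	run("-p", profile, "image", "load", tar)             // tarball -> cluster
	run("-p", profile, "image", "save", "--daemon", tag) // cluster -> local docker daemon
	run("-p", profile, "image", "ls")                    // confirm the tag is back
}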

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-577059 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.64.8 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-577059 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (18.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-577059 /tmp/TestFunctionalparallelMountCmdany-port2370692773/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722252579001607612" to /tmp/TestFunctionalparallelMountCmdany-port2370692773/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722252579001607612" to /tmp/TestFunctionalparallelMountCmdany-port2370692773/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722252579001607612" to /tmp/TestFunctionalparallelMountCmdany-port2370692773/001/test-1722252579001607612
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-577059 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.076668ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 11:29 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 11:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 11:29 test-1722252579001607612
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh cat /mount-9p/test-1722252579001607612
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-577059 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ec8f9575-aad8-41b2-aed7-17cbbb0f80f9] Pending
helpers_test.go:344: "busybox-mount" [ec8f9575-aad8-41b2-aed7-17cbbb0f80f9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ec8f9575-aad8-41b2-aed7-17cbbb0f80f9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ec8f9575-aad8-41b2-aed7-17cbbb0f80f9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.004522566s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-577059 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-577059 /tmp/TestFunctionalparallelMountCmdany-port2370692773/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.63s)
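The mount test starts `minikube mount` as a background daemon and then probes the guest with findmnt until the 9p mount appears, which is why the first probe above is an expected non-zero exit. A minimal sketch of that start-and-retry pattern; the host directory is a placeholder:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Start the mount daemon in the background, as the test's (dbg) daemon step does.
	// /tmp/mount-src is a placeholder for the per-test temp directory in the log.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-577059", "/tmp/mount-src:/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		log.Fatalf("starting mount: %v", err)
	}
	defer mount.Process.Kill() // the real harness stops it more gracefully

	// The first findmnt probe can race the mount becoming visible, so retry briefly.
	for i := 0; i < 10; i++ {
		probe := exec.Command("out/minikube-linux-amd64", "-p", "functional-577059",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := probe.CombinedOutput(); err == nil {
			log.Printf("9p mount is up:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never became visible in the guest")
}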

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-577059 /tmp/TestFunctionalparallelMountCmdspecific-port1643123456/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-577059 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.714426ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-577059 /tmp/TestFunctionalparallelMountCmdspecific-port1643123456/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-577059 ssh "sudo umount -f /mount-9p": exit status 1 (196.199948ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-577059 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-577059 /tmp/TestFunctionalparallelMountCmdspecific-port1643123456/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-577059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2772174380/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-577059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2772174380/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-577059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2772174380/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-577059 ssh "findmnt -T" /mount1: exit status 1 (269.256707ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-577059 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-577059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2772174380/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-577059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2772174380/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-577059 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2772174380/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-577059 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-577059 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-4jzfd" [2f9d54fc-8928-4d71-8be4-807e3fb736e4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-4jzfd" [2f9d54fc-8928-4d71-8be4-807e3fb736e4] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004002117s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.18s)
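DeployApp creates a hello-node Deployment, exposes it as a NodePort service on 8080, and the later ServiceCmd subtests resolve its URL via `minikube service`. A compact sketch of those calls, assuming kubectl is pointed at the functional-577059 context:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// must runs one command and aborts with its combined output on failure.
func must(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	// Create and expose hello-node, mirroring the DeployApp step above.
	must("kubectl", "--context", "functional-577059", "create", "deployment",
		"hello-node", "--image=registry.k8s.io/echoserver:1.8")
	must("kubectl", "--context", "functional-577059", "expose", "deployment",
		"hello-node", "--type=NodePort", "--port=8080")

	// Once the pod is Ready, ask minikube for the reachable NodePort URL.
	url := must("out/minikube-linux-amd64", "-p", "functional-577059",
		"service", "hello-node", "--url")
	fmt.Printf("hello-node endpoint: %s", url)
}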

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 service list: (1.628305505s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-577059 service list -o json: (1.619791365s)
functional_test.go:1490: Took "1.619902428s" to run "out/minikube-linux-amd64 -p functional-577059 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.227:32001
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-577059 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.227:32001
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-577059
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-577059
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-577059
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (203.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-691698 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-691698 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m22.475750268s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.13s)
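StartCluster boots a multi-control-plane cluster with the flags shown above and then checks `status` across all nodes. A minimal reproduction sketch using the same invocation recorded in the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same invocation as the StartCluster step: an HA cluster on kvm2 with cri-o.
	start := exec.Command("out/minikube-linux-amd64", "start", "-p", "ha-691698",
		"--wait=true", "--memory=2200", "--ha", "-v=7", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}

	// Status of every node in the profile, as checked right after the start.
	status := exec.Command("out/minikube-linux-amd64", "-p", "ha-691698",
		"status", "-v=7", "--alsologtostderr")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	if err := status.Run(); err != nil {
		log.Fatalf("status failed: %v", err)
	}
}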

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-691698 -- rollout status deployment/busybox: (4.878733734s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-22qb4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-72n5l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-t69zw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-22qb4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-72n5l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-t69zw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-22qb4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-72n5l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-t69zw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.09s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-22qb4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-22qb4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-72n5l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-72n5l -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-t69zw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-691698 -- exec busybox-fc5497c4f-t69zw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
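PingHostFromPods resolves host.minikube.internal from inside each busybox pod and pings the returned host-side address. A sketch of that probe for a single pod, assuming kubectl can reach the ha-691698 context directly (the test goes through `minikube kubectl` instead) and reusing the same nslookup/awk/cut pipeline:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "--context=ha-691698"
	const pod = "busybox-fc5497c4f-22qb4" // one of the three pods listed above

	// Resolve host.minikube.internal inside the pod, keeping only the address field.
	lookup := exec.Command("kubectl", ctx, "exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out, err := lookup.CombinedOutput()
	if err != nil {
		log.Fatalf("nslookup failed: %v\n%s", err, out)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// Ping the resolved host address once from inside the same pod.
	ping := exec.Command("kubectl", ctx, "exec", pod, "--", "sh", "-c",
		fmt.Sprintf("ping -c 1 %s", hostIP))
	if out, err := ping.CombinedOutput(); err != nil {
		log.Fatalf("ping failed: %v\n%s", err, out)
	}
	fmt.Println("pod can reach the host network")
}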

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-691698 -v=7 --alsologtostderr
E0729 11:34:27.393446  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:34:27.399241  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:34:27.409568  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:34:27.429887  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:34:27.470202  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:34:27.550579  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:34:27.711036  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:34:28.032213  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:34:28.672817  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:34:29.953022  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:34:32.514203  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:34:37.634407  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-691698 -v=7 --alsologtostderr: (55.078798895s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.88s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-691698 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp testdata/cp-test.txt ha-691698:/home/docker/cp-test.txt
E0729 11:34:47.874839  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1858176500/001/cp-test_ha-691698.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698:/home/docker/cp-test.txt ha-691698-m02:/home/docker/cp-test_ha-691698_ha-691698-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m02 "sudo cat /home/docker/cp-test_ha-691698_ha-691698-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698:/home/docker/cp-test.txt ha-691698-m03:/home/docker/cp-test_ha-691698_ha-691698-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m03 "sudo cat /home/docker/cp-test_ha-691698_ha-691698-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698:/home/docker/cp-test.txt ha-691698-m04:/home/docker/cp-test_ha-691698_ha-691698-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m04 "sudo cat /home/docker/cp-test_ha-691698_ha-691698-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp testdata/cp-test.txt ha-691698-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1858176500/001/cp-test_ha-691698-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m02:/home/docker/cp-test.txt ha-691698:/home/docker/cp-test_ha-691698-m02_ha-691698.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698 "sudo cat /home/docker/cp-test_ha-691698-m02_ha-691698.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m02:/home/docker/cp-test.txt ha-691698-m03:/home/docker/cp-test_ha-691698-m02_ha-691698-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m03 "sudo cat /home/docker/cp-test_ha-691698-m02_ha-691698-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m02:/home/docker/cp-test.txt ha-691698-m04:/home/docker/cp-test_ha-691698-m02_ha-691698-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m04 "sudo cat /home/docker/cp-test_ha-691698-m02_ha-691698-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp testdata/cp-test.txt ha-691698-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1858176500/001/cp-test_ha-691698-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt ha-691698:/home/docker/cp-test_ha-691698-m03_ha-691698.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698 "sudo cat /home/docker/cp-test_ha-691698-m03_ha-691698.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt ha-691698-m02:/home/docker/cp-test_ha-691698-m03_ha-691698-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m02 "sudo cat /home/docker/cp-test_ha-691698-m03_ha-691698-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m03:/home/docker/cp-test.txt ha-691698-m04:/home/docker/cp-test_ha-691698-m03_ha-691698-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m04 "sudo cat /home/docker/cp-test_ha-691698-m03_ha-691698-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp testdata/cp-test.txt ha-691698-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1858176500/001/cp-test_ha-691698-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt ha-691698:/home/docker/cp-test_ha-691698-m04_ha-691698.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698 "sudo cat /home/docker/cp-test_ha-691698-m04_ha-691698.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt ha-691698-m02:/home/docker/cp-test_ha-691698-m04_ha-691698-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m02 "sudo cat /home/docker/cp-test_ha-691698-m04_ha-691698-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 cp ha-691698-m04:/home/docker/cp-test.txt ha-691698-m03:/home/docker/cp-test_ha-691698-m04_ha-691698-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m03 "sudo cat /home/docker/cp-test_ha-691698-m04_ha-691698-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.75s)
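The commands above exercise minikube's file-copy path in both directions: cp pushes a local file into a node (or from one node to another), and ssh -n <node> reads it back to confirm the contents arrived. A minimal hand-run sketch of the same round trip, assuming the ha-691698 profile from this run is still up (the target file name is illustrative):

	# push a local file into the primary control-plane node
	out/minikube-linux-amd64 -p ha-691698 cp testdata/cp-test.txt ha-691698:/home/docker/cp-test.txt
	# copy it node-to-node, primary -> second control-plane node
	out/minikube-linux-amd64 -p ha-691698 cp ha-691698:/home/docker/cp-test.txt ha-691698-m02:/home/docker/cp-test.txt
	# verify the contents on the receiving node
	out/minikube-linux-amd64 -p ha-691698 ssh -n ha-691698-m02 "sudo cat /home/docker/cp-test.txt"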

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.484894052s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-691698 node delete m03 -v=7 --alsologtostderr: (16.359715406s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.08s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (343.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-691698 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 11:49:27.393240  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
E0729 11:50:50.440605  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-691698 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m42.762031644s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (343.49s)
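The final assertion above pipes kubectl get nodes through a go-template that walks each node's status.conditions and prints only the Ready condition's status, so a healthy restarted HA cluster emits one "True" line per node. The same check can be run by hand (command copied from this run; the comment describes the expected shape, not captured output):

	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
	# expected: one line per node, each reading True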

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (71.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-691698 --control-plane -v=7 --alsologtostderr
E0729 11:54:27.393918  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-691698 --control-plane -v=7 --alsologtostderr: (1m10.813375615s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-691698 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.62s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (54.15s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-321147 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-321147 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (54.151412724s)
--- PASS: TestJSONOutput/start/Command (54.15s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-321147 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-321147 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.66s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-321147 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-321147 --output=json --user=testUser: (6.659614496s)
--- PASS: TestJSONOutput/stop/Command (6.66s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-650842 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-650842 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.730965ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"734ac75a-19ac-4250-932d-ad7e168b0b11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-650842] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d91a825-c7d3-4604-b51e-b2f6c7475e00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19336"}}
	{"specversion":"1.0","id":"9ce35b0f-efb5-4e43-a594-5f66a568c55d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"53c583a2-117d-4e5b-bc93-8aa375305cd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig"}}
	{"specversion":"1.0","id":"3e28d6c7-b1d5-4eb2-99e4-802c07a0d2d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube"}}
	{"specversion":"1.0","id":"46b798c6-29b1-4758-b1d1-df99f1fa31eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8817d508-0212-4465-81a4-87789ccf005a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b4297574-3b9b-472e-a671-ddc3ca79050e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-650842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-650842
--- PASS: TestErrorJSONOutput (0.20s)
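With --output=json, every line minikube emits is a CloudEvents-style JSON object (specversion, type, data), which is what makes the failure above machine-readable: the error event carries exitcode 56 and name DRV_UNSUPPORTED_OS. A small sketch for pulling that error back out of the stream, assuming jq is available (jq is not part of this test run):

	out/minikube-linux-amd64 start -p json-output-error-650842 --memory=2200 --output=json --wait=true --driver=fail 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64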

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (84.8s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-613901 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-613901 --driver=kvm2  --container-runtime=crio: (38.967523062s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-616354 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-616354 --driver=kvm2  --container-runtime=crio: (43.149500763s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-613901
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-616354
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-616354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-616354
helpers_test.go:175: Cleaning up "first-613901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-613901
--- PASS: TestMinikubeProfile (84.80s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-536283 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-536283 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.030833152s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.03s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-536283 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-536283 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
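These two tests start a Kubernetes-free VM with a 9p host mount and then confirm the mount from inside the guest. A compressed sketch of the same flow, reusing the flags and profile name from this run (which host directory ends up at /minikube-host is minikube's default when no --mount-string is given):

	# start a VM with the default host directory mounted over 9p, no Kubernetes components
	out/minikube-linux-amd64 start -p mount-start-1-536283 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
	# the mounted directory should be listable at /minikube-host ...
	out/minikube-linux-amd64 -p mount-start-1-536283 ssh -- ls /minikube-host
	# ... and show up as a 9p filesystem in the guest's mount table
	out/minikube-linux-amd64 -p mount-start-1-536283 ssh -- mount | grep 9p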

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-550100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-550100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.281186831s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.28s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-550100 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-550100 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-536283 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-550100 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-550100 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-550100
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-550100: (1.274912058s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.28s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-550100
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-550100: (22.278262109s)
--- PASS: TestMountStart/serial/RestartStopped (23.28s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-550100 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-550100 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (120.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-293807 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 11:59:27.394214  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-293807 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m59.861253231s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (120.27s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-293807 -- rollout status deployment/busybox: (2.644896031s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- exec busybox-fc5497c4f-tzhl8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- exec busybox-fc5497c4f-xzbtt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- exec busybox-fc5497c4f-tzhl8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- exec busybox-fc5497c4f-xzbtt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- exec busybox-fc5497c4f-tzhl8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- exec busybox-fc5497c4f-xzbtt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.10s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- exec busybox-fc5497c4f-tzhl8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- exec busybox-fc5497c4f-tzhl8 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- exec busybox-fc5497c4f-xzbtt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-293807 -- exec busybox-fc5497c4f-xzbtt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
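Inside each busybox pod the test resolves host.minikube.internal (the awk/cut pipeline just plucks the resolved address out of busybox's nslookup output) and then pings it; here that address is the KVM bridge gateway, 192.168.39.1. A hand-run equivalent, assuming the busybox deployment from the previous step is still the only workload in the default namespace (POD and HOST_IP are illustrative shell variables):

	# grab one of the busybox pod names
	POD=$(kubectl --context multinode-293807 get pods -o jsonpath='{.items[0].metadata.name}')
	# resolve host.minikube.internal from inside the pod, then ping the resolved address
	HOST_IP=$(kubectl --context multinode-293807 exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context multinode-293807 exec "$POD" -- sh -c "ping -c 1 $HOST_IP"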

                                                
                                    
TestMultiNode/serial/AddNode (45.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-293807 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-293807 -v 3 --alsologtostderr: (45.217908461s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.79s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-293807 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp testdata/cp-test.txt multinode-293807:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp multinode-293807:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1050760835/001/cp-test_multinode-293807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp multinode-293807:/home/docker/cp-test.txt multinode-293807-m02:/home/docker/cp-test_multinode-293807_multinode-293807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m02 "sudo cat /home/docker/cp-test_multinode-293807_multinode-293807-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp multinode-293807:/home/docker/cp-test.txt multinode-293807-m03:/home/docker/cp-test_multinode-293807_multinode-293807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m03 "sudo cat /home/docker/cp-test_multinode-293807_multinode-293807-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp testdata/cp-test.txt multinode-293807-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp multinode-293807-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1050760835/001/cp-test_multinode-293807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp multinode-293807-m02:/home/docker/cp-test.txt multinode-293807:/home/docker/cp-test_multinode-293807-m02_multinode-293807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807 "sudo cat /home/docker/cp-test_multinode-293807-m02_multinode-293807.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp multinode-293807-m02:/home/docker/cp-test.txt multinode-293807-m03:/home/docker/cp-test_multinode-293807-m02_multinode-293807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m03 "sudo cat /home/docker/cp-test_multinode-293807-m02_multinode-293807-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp testdata/cp-test.txt multinode-293807-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp multinode-293807-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1050760835/001/cp-test_multinode-293807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp multinode-293807-m03:/home/docker/cp-test.txt multinode-293807:/home/docker/cp-test_multinode-293807-m03_multinode-293807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807 "sudo cat /home/docker/cp-test_multinode-293807-m03_multinode-293807.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 cp multinode-293807-m03:/home/docker/cp-test.txt multinode-293807-m02:/home/docker/cp-test_multinode-293807-m03_multinode-293807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 ssh -n multinode-293807-m02 "sudo cat /home/docker/cp-test_multinode-293807-m03_multinode-293807-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.34s)

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-293807 node stop m03: (1.370696377s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-293807 status: exit status 7 (434.864305ms)

                                                
                                                
-- stdout --
	multinode-293807
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-293807-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-293807-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-293807 status --alsologtostderr: exit status 7 (438.199678ms)

                                                
                                                
-- stdout --
	multinode-293807
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-293807-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-293807-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 12:01:37.094198  153018 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:01:37.094449  153018 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:01:37.094457  153018 out.go:304] Setting ErrFile to fd 2...
	I0729 12:01:37.094461  153018 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:01:37.094658  153018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19336-113730/.minikube/bin
	I0729 12:01:37.094828  153018 out.go:298] Setting JSON to false
	I0729 12:01:37.094855  153018 mustload.go:65] Loading cluster: multinode-293807
	I0729 12:01:37.094960  153018 notify.go:220] Checking for updates...
	I0729 12:01:37.095269  153018 config.go:182] Loaded profile config "multinode-293807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 12:01:37.095288  153018 status.go:255] checking status of multinode-293807 ...
	I0729 12:01:37.095765  153018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:01:37.095827  153018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:01:37.113344  153018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45245
	I0729 12:01:37.113862  153018 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:01:37.114470  153018 main.go:141] libmachine: Using API Version  1
	I0729 12:01:37.114511  153018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:01:37.114872  153018 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:01:37.115218  153018 main.go:141] libmachine: (multinode-293807) Calling .GetState
	I0729 12:01:37.116891  153018 status.go:330] multinode-293807 host status = "Running" (err=<nil>)
	I0729 12:01:37.116916  153018 host.go:66] Checking if "multinode-293807" exists ...
	I0729 12:01:37.117237  153018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:01:37.117280  153018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:01:37.133024  153018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46743
	I0729 12:01:37.133538  153018 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:01:37.134089  153018 main.go:141] libmachine: Using API Version  1
	I0729 12:01:37.134120  153018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:01:37.134456  153018 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:01:37.134680  153018 main.go:141] libmachine: (multinode-293807) Calling .GetIP
	I0729 12:01:37.137571  153018 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:01:37.138016  153018 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:01:37.138052  153018 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:01:37.138175  153018 host.go:66] Checking if "multinode-293807" exists ...
	I0729 12:01:37.138497  153018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:01:37.138532  153018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:01:37.155803  153018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0729 12:01:37.156260  153018 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:01:37.156862  153018 main.go:141] libmachine: Using API Version  1
	I0729 12:01:37.156916  153018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:01:37.157318  153018 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:01:37.157559  153018 main.go:141] libmachine: (multinode-293807) Calling .DriverName
	I0729 12:01:37.157822  153018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:01:37.157864  153018 main.go:141] libmachine: (multinode-293807) Calling .GetSSHHostname
	I0729 12:01:37.161288  153018 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:01:37.161729  153018 main.go:141] libmachine: (multinode-293807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:79:de", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 12:58:50 +0000 UTC Type:0 Mac:52:54:00:45:79:de Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-293807 Clientid:01:52:54:00:45:79:de}
	I0729 12:01:37.161776  153018 main.go:141] libmachine: (multinode-293807) DBG | domain multinode-293807 has defined IP address 192.168.39.26 and MAC address 52:54:00:45:79:de in network mk-multinode-293807
	I0729 12:01:37.162097  153018 main.go:141] libmachine: (multinode-293807) Calling .GetSSHPort
	I0729 12:01:37.162326  153018 main.go:141] libmachine: (multinode-293807) Calling .GetSSHKeyPath
	I0729 12:01:37.162491  153018 main.go:141] libmachine: (multinode-293807) Calling .GetSSHUsername
	I0729 12:01:37.162636  153018 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/multinode-293807/id_rsa Username:docker}
	I0729 12:01:37.249071  153018 ssh_runner.go:195] Run: systemctl --version
	I0729 12:01:37.255610  153018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:01:37.271436  153018 kubeconfig.go:125] found "multinode-293807" server: "https://192.168.39.26:8443"
	I0729 12:01:37.271484  153018 api_server.go:166] Checking apiserver status ...
	I0729 12:01:37.271530  153018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 12:01:37.285725  153018 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1133/cgroup
	W0729 12:01:37.295744  153018 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1133/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 12:01:37.295820  153018 ssh_runner.go:195] Run: ls
	I0729 12:01:37.300125  153018 api_server.go:253] Checking apiserver healthz at https://192.168.39.26:8443/healthz ...
	I0729 12:01:37.305846  153018 api_server.go:279] https://192.168.39.26:8443/healthz returned 200:
	ok
	I0729 12:01:37.305880  153018 status.go:422] multinode-293807 apiserver status = Running (err=<nil>)
	I0729 12:01:37.305893  153018 status.go:257] multinode-293807 status: &{Name:multinode-293807 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:01:37.305917  153018 status.go:255] checking status of multinode-293807-m02 ...
	I0729 12:01:37.306232  153018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:01:37.306264  153018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:01:37.322434  153018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43847
	I0729 12:01:37.322907  153018 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:01:37.323350  153018 main.go:141] libmachine: Using API Version  1
	I0729 12:01:37.323372  153018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:01:37.323671  153018 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:01:37.323881  153018 main.go:141] libmachine: (multinode-293807-m02) Calling .GetState
	I0729 12:01:37.325488  153018 status.go:330] multinode-293807-m02 host status = "Running" (err=<nil>)
	I0729 12:01:37.325508  153018 host.go:66] Checking if "multinode-293807-m02" exists ...
	I0729 12:01:37.325822  153018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:01:37.325867  153018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:01:37.342310  153018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0729 12:01:37.342791  153018 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:01:37.343309  153018 main.go:141] libmachine: Using API Version  1
	I0729 12:01:37.343331  153018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:01:37.343690  153018 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:01:37.343895  153018 main.go:141] libmachine: (multinode-293807-m02) Calling .GetIP
	I0729 12:01:37.346829  153018 main.go:141] libmachine: (multinode-293807-m02) DBG | domain multinode-293807-m02 has defined MAC address 52:54:00:84:57:65 in network mk-multinode-293807
	I0729 12:01:37.347248  153018 main.go:141] libmachine: (multinode-293807-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:57:65", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 13:00:04 +0000 UTC Type:0 Mac:52:54:00:84:57:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-293807-m02 Clientid:01:52:54:00:84:57:65}
	I0729 12:01:37.347285  153018 main.go:141] libmachine: (multinode-293807-m02) DBG | domain multinode-293807-m02 has defined IP address 192.168.39.54 and MAC address 52:54:00:84:57:65 in network mk-multinode-293807
	I0729 12:01:37.347503  153018 host.go:66] Checking if "multinode-293807-m02" exists ...
	I0729 12:01:37.347858  153018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:01:37.347902  153018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:01:37.364413  153018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0729 12:01:37.364853  153018 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:01:37.365340  153018 main.go:141] libmachine: Using API Version  1
	I0729 12:01:37.365364  153018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:01:37.365690  153018 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:01:37.365927  153018 main.go:141] libmachine: (multinode-293807-m02) Calling .DriverName
	I0729 12:01:37.366129  153018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:01:37.366157  153018 main.go:141] libmachine: (multinode-293807-m02) Calling .GetSSHHostname
	I0729 12:01:37.369106  153018 main.go:141] libmachine: (multinode-293807-m02) DBG | domain multinode-293807-m02 has defined MAC address 52:54:00:84:57:65 in network mk-multinode-293807
	I0729 12:01:37.369515  153018 main.go:141] libmachine: (multinode-293807-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:57:65", ip: ""} in network mk-multinode-293807: {Iface:virbr1 ExpiryTime:2024-07-29 13:00:04 +0000 UTC Type:0 Mac:52:54:00:84:57:65 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-293807-m02 Clientid:01:52:54:00:84:57:65}
	I0729 12:01:37.369544  153018 main.go:141] libmachine: (multinode-293807-m02) DBG | domain multinode-293807-m02 has defined IP address 192.168.39.54 and MAC address 52:54:00:84:57:65 in network mk-multinode-293807
	I0729 12:01:37.369697  153018 main.go:141] libmachine: (multinode-293807-m02) Calling .GetSSHPort
	I0729 12:01:37.369913  153018 main.go:141] libmachine: (multinode-293807-m02) Calling .GetSSHKeyPath
	I0729 12:01:37.370112  153018 main.go:141] libmachine: (multinode-293807-m02) Calling .GetSSHUsername
	I0729 12:01:37.370267  153018 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19336-113730/.minikube/machines/multinode-293807-m02/id_rsa Username:docker}
	I0729 12:01:37.452308  153018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 12:01:37.467569  153018 status.go:257] multinode-293807-m02 status: &{Name:multinode-293807-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 12:01:37.467624  153018 status.go:255] checking status of multinode-293807-m03 ...
	I0729 12:01:37.467963  153018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 12:01:37.467995  153018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 12:01:37.484249  153018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0729 12:01:37.484697  153018 main.go:141] libmachine: () Calling .GetVersion
	I0729 12:01:37.485225  153018 main.go:141] libmachine: Using API Version  1
	I0729 12:01:37.485254  153018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 12:01:37.485605  153018 main.go:141] libmachine: () Calling .GetMachineName
	I0729 12:01:37.485819  153018 main.go:141] libmachine: (multinode-293807-m03) Calling .GetState
	I0729 12:01:37.487530  153018 status.go:330] multinode-293807-m03 host status = "Stopped" (err=<nil>)
	I0729 12:01:37.487547  153018 status.go:343] host is not running, skipping remaining checks
	I0729 12:01:37.487555  153018 status.go:257] multinode-293807-m03 status: &{Name:multinode-293807-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
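Note: the stderr trace above shows how minikube status probes each node over SSH before reporting: a disk-usage check on /var and a kubelet liveness check. The two commands below are copied from that trace and run inside the guest, not on the host:

	sh -c "df -h /var | awk 'NR==2{print $5}'"           # disk usage of /var (fifth column of df's second line)
	sudo systemctl is-active --quiet service kubelet     # exit 0 only if kubelet is running

A stopped node (multinode-293807-m03 here) short-circuits these checks and is simply reported as Stopped.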

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-293807 node start m03 -v=7 --alsologtostderr: (37.969532378s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.59s)
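Note: the restart flow exercised here can be reproduced by hand against any multinode profile; the profile and node names below are placeholders, and minikube stands for the binary under test (out/minikube-linux-amd64):

	$ minikube -p <profile> node start m03 --alsologtostderr   # restart the stopped worker
	$ minikube -p <profile> status                             # all hosts should report Running again
	$ kubectl get nodes                                        # the worker should rejoin as Ready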

                                                
                                    
TestMultiNode/serial/DeleteNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-293807 node delete m03: (1.782335355s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.31s)
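Note: the readiness check after the delete uses the go-template shown above, which prints one Ready condition status per remaining node. A minimal manual equivalent (profile and node names are placeholders; the kubectl command is copied verbatim from the test):

	$ minikube -p <profile> node delete m03
	$ kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"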

                                                
                                    
TestMultiNode/serial/RestartMultiNode (206.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-293807 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-293807 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m25.826022864s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-293807 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (206.37s)
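Note: the restart is a plain start against the existing profile with --wait=true, so all nodes are brought back and waited on before the readiness check. A sketch with a placeholder profile name:

	$ minikube start -p <profile> --wait=true --driver=kvm2 --container-runtime=crio
	$ minikube -p <profile> status
	$ kubectl get nodes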

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-293807
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-293807-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-293807-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.100243ms)

                                                
                                                
-- stdout --
	* [multinode-293807-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19336
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-293807-m02' is duplicated with machine name 'multinode-293807-m02' in profile 'multinode-293807'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-293807-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-293807-m03 --driver=kvm2  --container-runtime=crio: (41.748883133s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-293807
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-293807: exit status 80 (206.140861ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-293807 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-293807-m03 already exists in multinode-293807-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-293807-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.85s)
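Note: two collisions are checked here: a new profile may not reuse a machine name that already belongs to a multinode profile (exit 14, MK_USAGE), and node add refuses when the next generated node name is already taken by another profile (exit 80, GUEST_NODE_ADD). A sketch with placeholder names:

	$ minikube start -p <existing>-m02 --driver=kvm2 --container-runtime=crio   # rejected: duplicates machine <existing>-m02
	$ minikube node add -p <existing>                                           # rejected while a separate profile named <existing>-m03 exists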

                                                
                                    
TestScheduledStopUnix (110.53s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-221413 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-221413 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.946859595s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221413 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-221413 -n scheduled-stop-221413
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221413 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221413 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-221413 -n scheduled-stop-221413
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-221413
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221413 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-221413
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-221413: exit status 7 (68.518517ms)

                                                
                                                
-- stdout --
	scheduled-stop-221413
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-221413 -n scheduled-stop-221413
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-221413 -n scheduled-stop-221413: exit status 7 (64.269053ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-221413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-221413
--- PASS: TestScheduledStopUnix (110.53s)
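Note: the schedule, cancel, and fire paths all go through the same stop command; the flags below are exactly the ones used above, with a placeholder profile name:

	$ minikube stop -p <profile> --schedule 5m                  # arm a stop five minutes out
	$ minikube status -p <profile> --format={{.TimeToStop}}     # shows the pending schedule
	$ minikube stop -p <profile> --cancel-scheduled             # disarm it
	$ minikube stop -p <profile> --schedule 15s                 # let this one fire; status then exits 7 with Host Stopped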

                                                
                                    
TestRunningBinaryUpgrade (142.84s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1049183019 start -p running-upgrade-661564 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1049183019 start -p running-upgrade-661564 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (48.71955161s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-661564 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-661564 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.527459559s)
helpers_test.go:175: Cleaning up "running-upgrade-661564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-661564
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-661564: (1.157669029s)
--- PASS: TestRunningBinaryUpgrade (142.84s)
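Note: the upgrade-in-place scenario is: create a cluster with a released v1.26.0 binary, then point the freshly built binary at the same, still-running profile. A sketch (the temp-file suffix and profile name are placeholders):

	$ /tmp/minikube-v1.26.0.<suffix> start -p <profile> --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 start -p <profile> --memory=2200 --driver=kvm2 --container-runtime=crio   # takes over the running cluster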

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-390849 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-390849 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (94.516508ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-390849] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19336
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19336-113730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19336-113730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
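Note: --no-kubernetes and --kubernetes-version are mutually exclusive, so this combination exits 14 without creating a VM; when the version comes from global config, the fix suggested in the error output is to unset it:

	$ minikube start -p <profile> --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio   # exit 14 (MK_USAGE)
	$ minikube config unset kubernetes-version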

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (116.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-390849 --driver=kvm2  --container-runtime=crio
E0729 12:19:27.394235  120963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19336-113730/.minikube/profiles/functional-577059/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-390849 --driver=kvm2  --container-runtime=crio: (1m55.864121549s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-390849 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (116.12s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (170.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.597801984 start -p stopped-upgrade-185676 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.597801984 start -p stopped-upgrade-185676 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m43.783178547s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.597801984 -p stopped-upgrade-185676 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.597801984 -p stopped-upgrade-185676 stop: (2.140163107s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-185676 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-185676 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.312158564s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (170.24s)
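Note: unlike the running-binary case, this path stops the cluster with the old binary first and lets the new binary bring it back up. A sketch with placeholder names:

	$ /tmp/minikube-v1.26.0.<suffix> start -p <profile> --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ /tmp/minikube-v1.26.0.<suffix> -p <profile> stop
	$ out/minikube-linux-amd64 start -p <profile> --memory=2200 --driver=kvm2 --container-runtime=crio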

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-390849 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-390849 --no-kubernetes --driver=kvm2  --container-runtime=crio: (9.245462607s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-390849 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-390849 status -o json: exit status 2 (225.322716ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-390849","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-390849
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-390849: (1.078344592s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.55s)
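Note: re-running start with --no-kubernetes against a profile that already has Kubernetes keeps the VM but shuts the control plane down, which status then reports with exit code 2. A sketch with a placeholder profile name:

	$ minikube start -p <profile> --no-kubernetes --driver=kvm2 --container-runtime=crio
	$ minikube -p <profile> status -o json   # Host Running, Kubelet and APIServer Stopped, exit status 2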

                                                
                                    
TestNoKubernetes/serial/Start (44.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-390849 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-390849 --no-kubernetes --driver=kvm2  --container-runtime=crio: (44.578910618s)
--- PASS: TestNoKubernetes/serial/Start (44.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-390849 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-390849 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.494232ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
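Note: the verification is a single SSH command, copied below with a placeholder profile name; any non-zero exit means kubelet is not active on the guest:

	$ minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"   # exits non-zero when kubelet is down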

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.60173195s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.073918845s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.68s)
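Note: both output forms are exercised while the Kubernetes-less profile exists, presumably to confirm that listing tolerates a profile without a cluster:

	$ minikube profile list
	$ minikube profile list --output=json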

                                                
                                    
TestNoKubernetes/serial/Stop (1.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-390849
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-390849: (1.646330747s)
--- PASS: TestNoKubernetes/serial/Stop (1.65s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (34.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-390849 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-390849 --driver=kvm2  --container-runtime=crio: (34.523742328s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (34.52s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-390849 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-390849 "sudo systemctl is-active --quiet service kubelet": exit status 1 (230.984366ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-185676
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-185676: (1.149584351s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                    
TestPause/serial/Start (68.82s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-737279 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-737279 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m8.82476059s)
--- PASS: TestPause/serial/Start (68.82s)
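Note: the pause suite starts from a minimal cluster with addons disabled and full readiness waiting, so the later steps in this serial suite operate on a quiet, fully converged control plane. A sketch with a placeholder profile name:

	$ minikube start -p <profile> --memory=2048 --install-addons=false --wait=all --driver=kvm2 --container-runtime=crio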

                                                
                                    

Test skip (30/216)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    